Jun 20 19:02:23.844594 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025 Jun 20 19:02:23.844613 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:02:23.844622 kernel: BIOS-provided physical RAM map: Jun 20 19:02:23.844627 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 20 19:02:23.844632 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 20 19:02:23.844636 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 20 19:02:23.844642 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Jun 20 19:02:23.844647 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Jun 20 19:02:23.844653 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jun 20 19:02:23.844658 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jun 20 19:02:23.844662 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 20 19:02:23.844667 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 20 19:02:23.844672 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 20 19:02:23.844677 kernel: NX (Execute Disable) protection: active Jun 20 19:02:23.844684 kernel: APIC: Static calls initialized Jun 20 19:02:23.844689 kernel: SMBIOS 3.0.0 present. 
Jun 20 19:02:23.844694 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jun 20 19:02:23.844699 kernel: Hypervisor detected: KVM Jun 20 19:02:23.844705 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 19:02:23.844710 kernel: kvm-clock: using sched offset of 3003493596 cycles Jun 20 19:02:23.844715 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 19:02:23.844720 kernel: tsc: Detected 2445.404 MHz processor Jun 20 19:02:23.844726 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:02:23.844732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:02:23.844738 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Jun 20 19:02:23.844744 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 20 19:02:23.844749 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:02:23.844754 kernel: Using GB pages for direct mapping Jun 20 19:02:23.844759 kernel: ACPI: Early table checksum verification disabled Jun 20 19:02:23.844764 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Jun 20 19:02:23.844770 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:02:23.844775 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:02:23.844780 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:02:23.844787 kernel: ACPI: FACS 0x000000007CFE0000 000040 Jun 20 19:02:23.844792 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:02:23.844797 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:02:23.844802 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:02:23.844808 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:02:23.844813 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Jun 20 19:02:23.844818 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Jun 20 19:02:23.844827 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Jun 20 19:02:23.844833 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Jun 20 19:02:23.844838 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Jun 20 19:02:23.844844 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Jun 20 19:02:23.844849 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Jun 20 19:02:23.844855 kernel: No NUMA configuration found Jun 20 19:02:23.844860 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Jun 20 19:02:23.844867 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Jun 20 19:02:23.844873 kernel: Zone ranges: Jun 20 19:02:23.844879 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:02:23.844884 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Jun 20 19:02:23.844889 kernel: Normal empty Jun 20 19:02:23.844895 kernel: Movable zone start for each node Jun 20 19:02:23.844900 kernel: Early memory node ranges Jun 20 19:02:23.844906 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 20 19:02:23.844911 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Jun 20 19:02:23.844918 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Jun 20 19:02:23.844923 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:02:23.844929 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 20 19:02:23.844934 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jun 20 19:02:23.844945 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 20 19:02:23.844950 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 19:02:23.844956 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:02:23.844961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 20 19:02:23.844967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 19:02:23.846945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:02:23.846955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 19:02:23.846962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 19:02:23.846968 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:02:23.846995 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 20 19:02:23.847001 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 20 19:02:23.847007 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 19:02:23.847013 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jun 20 19:02:23.847018 kernel: Booting paravirtualized kernel on KVM Jun 20 19:02:23.847024 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:02:23.847033 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:02:23.847039 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jun 20 19:02:23.847044 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jun 20 19:02:23.847050 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:02:23.847056 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 20 19:02:23.847062 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:02:23.847069 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:02:23.847074 kernel: random: crng init done Jun 20 19:02:23.847081 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:02:23.847087 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:02:23.847092 kernel: Fallback order for Node 0: 0 Jun 20 19:02:23.847098 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Jun 20 19:02:23.847103 kernel: Policy zone: DMA32 Jun 20 19:02:23.847109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:02:23.847115 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 127200K reserved, 0K cma-reserved) Jun 20 19:02:23.847121 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:02:23.847126 kernel: ftrace: allocating 37938 entries in 149 pages Jun 20 19:02:23.847133 kernel: ftrace: allocated 149 pages with 4 groups Jun 20 19:02:23.847139 kernel: Dynamic Preempt: voluntary Jun 20 19:02:23.847145 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:02:23.847151 kernel: rcu: RCU event tracing is enabled. Jun 20 19:02:23.847157 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:02:23.847162 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:02:23.847168 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:02:23.847173 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:02:23.847179 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:02:23.847186 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:02:23.847192 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 19:02:23.847197 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:02:23.847203 kernel: Console: colour VGA+ 80x25 Jun 20 19:02:23.847208 kernel: printk: console [tty0] enabled Jun 20 19:02:23.847213 kernel: printk: console [ttyS0] enabled Jun 20 19:02:23.847219 kernel: ACPI: Core revision 20230628 Jun 20 19:02:23.847224 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 20 19:02:23.847230 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:02:23.847236 kernel: x2apic enabled Jun 20 19:02:23.847243 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:02:23.847248 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 19:02:23.847254 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 20 19:02:23.847259 kernel: Calibrating delay loop (skipped) preset value.. 
4890.80 BogoMIPS (lpj=2445404) Jun 20 19:02:23.847265 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 20 19:02:23.847271 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 20 19:02:23.847276 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 20 19:02:23.847287 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:02:23.847305 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 19:02:23.847311 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:02:23.847317 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 20 19:02:23.847324 kernel: RETBleed: Mitigation: untrained return thunk Jun 20 19:02:23.847329 kernel: Spectre V2 : User space: Vulnerable Jun 20 19:02:23.847340 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 19:02:23.847346 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:02:23.847352 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:02:23.847359 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:02:23.847365 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:02:23.847371 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 20 19:02:23.847377 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:02:23.847383 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:02:23.847388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 19:02:23.847394 kernel: landlock: Up and running. Jun 20 19:02:23.847400 kernel: SELinux: Initializing. Jun 20 19:02:23.847406 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:02:23.847413 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:02:23.847419 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 20 19:02:23.847425 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:02:23.847431 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:02:23.847436 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:02:23.847442 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 20 19:02:23.847448 kernel: ... version: 0 Jun 20 19:02:23.847454 kernel: ... bit width: 48 Jun 20 19:02:23.847459 kernel: ... generic registers: 6 Jun 20 19:02:23.847466 kernel: ... value mask: 0000ffffffffffff Jun 20 19:02:23.847472 kernel: ... max period: 00007fffffffffff Jun 20 19:02:23.847478 kernel: ... fixed-purpose events: 0 Jun 20 19:02:23.847484 kernel: ... event mask: 000000000000003f Jun 20 19:02:23.847490 kernel: signal: max sigframe size: 1776 Jun 20 19:02:23.847496 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:02:23.847501 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:02:23.847507 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:02:23.847513 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:02:23.847520 kernel: .... 
node #0, CPUs: #1 Jun 20 19:02:23.847526 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:02:23.847531 kernel: smpboot: Max logical packages: 1 Jun 20 19:02:23.847537 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Jun 20 19:02:23.847543 kernel: devtmpfs: initialized Jun 20 19:02:23.847549 kernel: x86/mm: Memory block size: 128MB Jun 20 19:02:23.847555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:02:23.847561 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:02:23.847566 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:02:23.847573 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:02:23.847579 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:02:23.847585 kernel: audit: type=2000 audit(1750446143.400:1): state=initialized audit_enabled=0 res=1 Jun 20 19:02:23.847590 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:02:23.847596 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:02:23.847602 kernel: cpuidle: using governor menu Jun 20 19:02:23.847608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:02:23.847613 kernel: dca service started, version 1.12.1 Jun 20 19:02:23.847619 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jun 20 19:02:23.847626 kernel: PCI: Using configuration type 1 for base access Jun 20 19:02:23.847632 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 20 19:02:23.847638 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:02:23.847644 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:02:23.847649 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:02:23.847655 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:02:23.847661 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:02:23.847667 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:02:23.847673 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:02:23.847680 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:02:23.847685 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 20 19:02:23.847691 kernel: ACPI: Interpreter enabled Jun 20 19:02:23.847697 kernel: ACPI: PM: (supports S0 S5) Jun 20 19:02:23.847703 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:02:23.847709 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:02:23.847715 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 19:02:23.847720 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jun 20 19:02:23.847726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 19:02:23.847838 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:02:23.847909 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jun 20 19:02:23.847988 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jun 20 19:02:23.847999 kernel: PCI host bridge to bus 0000:00 Jun 20 19:02:23.848100 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:02:23.848164 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 19:02:23.848227 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Jun 20 19:02:23.848285 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Jun 20 19:02:23.848359 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 20 19:02:23.848417 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jun 20 19:02:23.848476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:02:23.848557 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jun 20 19:02:23.848630 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jun 20 19:02:23.848701 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Jun 20 19:02:23.848766 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Jun 20 19:02:23.848830 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Jun 20 19:02:23.848894 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Jun 20 19:02:23.848956 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 19:02:23.849049 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.849122 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Jun 20 19:02:23.849192 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.849257 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Jun 20 19:02:23.849339 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.849406 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Jun 20 19:02:23.849476 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.849541 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Jun 20 19:02:23.849615 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.849680 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Jun 20 19:02:23.849756 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.849821 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Jun 20 19:02:23.849894 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.849958 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Jun 20 19:02:23.850067 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.850135 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Jun 20 19:02:23.850216 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jun 20 19:02:23.850283 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Jun 20 19:02:23.850372 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jun 20 19:02:23.850443 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jun 20 19:02:23.850519 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jun 20 19:02:23.850583 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Jun 20 19:02:23.850646 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Jun 20 19:02:23.850715 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jun 20 19:02:23.850780 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jun 20 19:02:23.850853 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jun 20 19:02:23.850924 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Jun 20 19:02:23.851012 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jun 20 19:02:23.851082 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff 
pref] Jun 20 19:02:23.851146 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jun 20 19:02:23.851210 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jun 20 19:02:23.851273 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jun 20 19:02:23.851359 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jun 20 19:02:23.851426 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Jun 20 19:02:23.851496 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jun 20 19:02:23.851558 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jun 20 19:02:23.851622 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jun 20 19:02:23.851697 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jun 20 19:02:23.851763 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Jun 20 19:02:23.851829 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Jun 20 19:02:23.851899 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jun 20 19:02:23.851962 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jun 20 19:02:23.852203 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 20 19:02:23.852327 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jun 20 19:02:23.852399 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jun 20 19:02:23.852464 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jun 20 19:02:23.852527 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jun 20 19:02:23.852594 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 20 19:02:23.852664 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jun 20 19:02:23.852729 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Jun 20 19:02:23.852793 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Jun 20 19:02:23.852865 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jun 20 19:02:23.852928 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jun 20 19:02:23.853008 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 20 19:02:23.853082 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jun 20 19:02:23.853153 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Jun 20 19:02:23.853219 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Jun 20 19:02:23.853282 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jun 20 19:02:23.853361 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jun 20 19:02:23.853424 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 20 19:02:23.853433 kernel: acpiphp: Slot [0] registered Jun 20 19:02:23.853502 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jun 20 19:02:23.853572 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Jun 20 19:02:23.853636 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Jun 20 19:02:23.853701 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Jun 20 19:02:23.853763 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jun 20 19:02:23.853825 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jun 20 19:02:23.853887 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 20 19:02:23.853896 kernel: acpiphp: Slot [0-2] registered Jun 20 19:02:23.853957 kernel: pci 
0000:00:02.7: PCI bridge to [bus 08] Jun 20 19:02:23.854081 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jun 20 19:02:23.854147 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 20 19:02:23.854156 kernel: acpiphp: Slot [0-3] registered Jun 20 19:02:23.854217 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jun 20 19:02:23.854279 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jun 20 19:02:23.854355 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 20 19:02:23.854364 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 19:02:23.854370 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 19:02:23.854379 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:02:23.854385 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 19:02:23.854391 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jun 20 19:02:23.854403 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jun 20 19:02:23.854408 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jun 20 19:02:23.854414 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jun 20 19:02:23.854420 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jun 20 19:02:23.854426 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jun 20 19:02:23.854432 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jun 20 19:02:23.854439 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jun 20 19:02:23.854445 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jun 20 19:02:23.854451 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jun 20 19:02:23.854457 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jun 20 19:02:23.854463 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jun 20 19:02:23.854468 kernel: iommu: Default domain type: Translated Jun 20 19:02:23.854474 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:02:23.854480 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:02:23.854486 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:02:23.854493 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 20 19:02:23.854499 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Jun 20 19:02:23.854564 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jun 20 19:02:23.854627 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jun 20 19:02:23.854689 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 19:02:23.854697 kernel: vgaarb: loaded Jun 20 19:02:23.854703 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 20 19:02:23.854709 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 20 19:02:23.854715 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 19:02:23.854723 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:02:23.854730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:02:23.854735 kernel: pnp: PnP ACPI init Jun 20 19:02:23.854803 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jun 20 19:02:23.854813 kernel: pnp: PnP ACPI: found 5 devices Jun 20 19:02:23.854819 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:02:23.854825 kernel: NET: Registered PF_INET protocol family Jun 20 19:02:23.854831 kernel: IP idents hash table entries: 32768 (order: 6, 262144 
bytes, linear) Jun 20 19:02:23.854839 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 19:02:23.854845 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:02:23.854851 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:02:23.854857 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 19:02:23.854863 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 19:02:23.854869 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:02:23.854875 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:02:23.854881 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:02:23.854887 kernel: NET: Registered PF_XDP protocol family Jun 20 19:02:23.854952 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 20 19:02:23.855035 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 20 19:02:23.855101 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 20 19:02:23.855165 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jun 20 19:02:23.855228 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jun 20 19:02:23.855302 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jun 20 19:02:23.855369 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jun 20 19:02:23.855437 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jun 20 19:02:23.855500 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jun 20 19:02:23.855562 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jun 20 19:02:23.855624 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jun 20 19:02:23.855686 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jun 20 19:02:23.855747 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jun 20 19:02:23.855809 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jun 20 19:02:23.855870 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 20 19:02:23.855936 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jun 20 19:02:23.856054 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jun 20 19:02:23.856118 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 20 19:02:23.856179 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jun 20 19:02:23.856261 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jun 20 19:02:23.856397 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 20 19:02:23.856523 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jun 20 19:02:23.856659 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jun 20 19:02:23.856776 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 20 19:02:23.856881 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jun 20 19:02:23.857008 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jun 20 19:02:23.857132 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jun 20 19:02:23.857377 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 20 19:02:23.857505 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jun 20 19:02:23.857611 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jun 20 
19:02:23.857686 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jun 20 19:02:23.857750 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 20 19:02:23.857817 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jun 20 19:02:23.857880 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jun 20 19:02:23.857942 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jun 20 19:02:23.858032 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 20 19:02:23.858094 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 19:02:23.858150 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 19:02:23.858210 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 19:02:23.858267 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Jun 20 19:02:23.858337 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jun 20 19:02:23.858393 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jun 20 19:02:23.858463 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Jun 20 19:02:23.858522 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Jun 20 19:02:23.858590 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Jun 20 19:02:23.858648 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jun 20 19:02:23.858711 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Jun 20 19:02:23.858770 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 20 19:02:23.858836 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Jun 20 19:02:23.858894 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 20 19:02:23.858955 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Jun 20 19:02:23.859191 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 20 19:02:23.859261 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Jun 20 19:02:23.859380 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 20 19:02:23.859452 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jun 20 19:02:23.859511 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Jun 20 19:02:23.859567 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 20 19:02:23.859628 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jun 20 19:02:23.859685 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Jun 20 19:02:23.859740 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 20 19:02:23.859801 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jun 20 19:02:23.859861 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Jun 20 19:02:23.859917 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 20 19:02:23.859926 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jun 20 19:02:23.859932 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:02:23.859939 kernel: Initialise system trusted keyrings Jun 20 19:02:23.859945 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 19:02:23.859951 kernel: Key type asymmetric registered Jun 20 19:02:23.859957 kernel: Asymmetric key parser 'x509' registered Jun 20 19:02:23.859965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 20 19:02:23.859985 kernel: io scheduler mq-deadline 
registered Jun 20 19:02:23.859992 kernel: io scheduler kyber registered Jun 20 19:02:23.859998 kernel: io scheduler bfq registered Jun 20 19:02:23.860088 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jun 20 19:02:23.860155 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jun 20 19:02:23.860218 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jun 20 19:02:23.860283 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jun 20 19:02:23.860410 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jun 20 19:02:23.860499 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jun 20 19:02:23.860563 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jun 20 19:02:23.860625 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jun 20 19:02:23.860688 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jun 20 19:02:23.860750 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jun 20 19:02:23.860813 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jun 20 19:02:23.860875 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jun 20 19:02:23.860937 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jun 20 19:02:23.861053 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jun 20 19:02:23.861119 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jun 20 19:02:23.861182 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jun 20 19:02:23.861192 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jun 20 19:02:23.861252 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jun 20 19:02:23.861330 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jun 20 19:02:23.861340 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:02:23.861347 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jun 20 19:02:23.861353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:02:23.861363 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:02:23.861369 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 19:02:23.861376 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:02:23.861382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:02:23.861451 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 20 19:02:23.861461 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 19:02:23.861516 kernel: rtc_cmos 00:03: registered as rtc0 Jun 20 19:02:23.861572 kernel: rtc_cmos 00:03: setting system clock to 2025-06-20T19:02:23 UTC (1750446143) Jun 20 19:02:23.861663 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 20 19:02:23.861677 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 20 19:02:23.861684 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:02:23.861691 kernel: Segment Routing with IPv6 Jun 20 19:02:23.861697 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:02:23.861703 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:02:23.861709 kernel: Key type dns_resolver registered Jun 20 19:02:23.861715 kernel: IPI shorthand broadcast: enabled Jun 20 19:02:23.861722 kernel: sched_clock: Marking stable (1052201974, 133797165)->(1194081152, -8082013) Jun 20 19:02:23.861731 kernel: registered taskstats version 1 Jun 20 19:02:23.861737 kernel: Loading compiled-in X.509 certificates Jun 20 19:02:23.861743 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 
583832681762bbd3c2cbcca308896cbba88c4497' Jun 20 19:02:23.861749 kernel: Key type .fscrypt registered Jun 20 19:02:23.861755 kernel: Key type fscrypt-provisioning registered Jun 20 19:02:23.861761 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:02:23.861768 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:02:23.861774 kernel: ima: No architecture policies found Jun 20 19:02:23.861781 kernel: clk: Disabling unused clocks Jun 20 19:02:23.861787 kernel: Freeing unused kernel image (initmem) memory: 43488K Jun 20 19:02:23.861794 kernel: Write protecting the kernel read-only data: 38912k Jun 20 19:02:23.861800 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jun 20 19:02:23.861806 kernel: Run /init as init process Jun 20 19:02:23.861812 kernel: with arguments: Jun 20 19:02:23.861818 kernel: /init Jun 20 19:02:23.861824 kernel: with environment: Jun 20 19:02:23.861830 kernel: HOME=/ Jun 20 19:02:23.861836 kernel: TERM=linux Jun 20 19:02:23.861843 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:02:23.861850 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:02:23.861860 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:02:23.861867 systemd[1]: Detected virtualization kvm. Jun 20 19:02:23.861874 systemd[1]: Detected architecture x86-64. Jun 20 19:02:23.861880 systemd[1]: Running in initrd. Jun 20 19:02:23.861887 systemd[1]: No hostname configured, using default hostname. Jun 20 19:02:23.861895 systemd[1]: Hostname set to . Jun 20 19:02:23.861901 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:02:23.861908 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:02:23.861914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:02:23.861921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:02:23.861928 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:02:23.861935 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:02:23.861942 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:02:23.861951 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:02:23.861958 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:02:23.861966 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:02:23.861986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:02:23.861993 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:02:23.862002 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:02:23.862008 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:02:23.862017 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:02:23.862039 systemd[1]: Reached target timers.target - Timer Units. 
Jun 20 19:02:23.862046 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:02:23.862053 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:02:23.862062 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:02:23.862068 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:02:23.862075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:02:23.862082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:02:23.862090 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:02:23.862097 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:02:23.862103 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:02:23.862110 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:02:23.862117 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:02:23.862123 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:02:23.862130 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:02:23.862137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:02:23.862144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:02:23.862151 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:02:23.862176 systemd-journald[188]: Collecting audit messages is disabled. Jun 20 19:02:23.862194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:02:23.862203 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:02:23.862210 systemd-journald[188]: Journal started Jun 20 19:02:23.862227 systemd-journald[188]: Runtime Journal (/run/log/journal/d5b063c9f6da4af8b12b82210e3d48e7) is 4.8M, max 38.3M, 33.5M free. Jun 20 19:02:23.859396 systemd-modules-load[189]: Inserted module 'overlay' Jun 20 19:02:23.905347 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:02:23.905363 kernel: Bridge firewalling registered Jun 20 19:02:23.905372 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:02:23.883449 systemd-modules-load[189]: Inserted module 'br_netfilter' Jun 20 19:02:23.906547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:02:23.907823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:02:23.913100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:02:23.914892 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:02:23.917077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:02:23.925159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:02:23.926170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:02:23.932200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:02:23.933041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jun 20 19:02:23.935219 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:02:23.941125 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:02:23.944067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:02:23.945797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:02:23.950423 dracut-cmdline[221]: dracut-dracut-053 Jun 20 19:02:23.952498 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:02:23.966684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:02:23.974256 systemd-resolved[223]: Positive Trust Anchors: Jun 20 19:02:23.974269 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:02:23.974303 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:02:23.977287 systemd-resolved[223]: Defaulting to hostname 'linux'. Jun 20 19:02:23.978026 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:02:23.978692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:02:24.004022 kernel: SCSI subsystem initialized Jun 20 19:02:24.010994 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:02:24.020002 kernel: iscsi: registered transport (tcp) Jun 20 19:02:24.036024 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:02:24.036065 kernel: QLogic iSCSI HBA Driver Jun 20 19:02:24.067261 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:02:24.074172 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:02:24.093557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:02:24.093611 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:02:24.093621 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 19:02:24.130002 kernel: raid6: avx2x4 gen() 31135 MB/s Jun 20 19:02:24.147018 kernel: raid6: avx2x2 gen() 30133 MB/s Jun 20 19:02:24.164206 kernel: raid6: avx2x1 gen() 21372 MB/s Jun 20 19:02:24.164273 kernel: raid6: using algorithm avx2x4 gen() 31135 MB/s Jun 20 19:02:24.182202 kernel: raid6: .... 
xor() 4270 MB/s, rmw enabled Jun 20 19:02:24.182250 kernel: raid6: using avx2x2 recovery algorithm Jun 20 19:02:24.200021 kernel: xor: automatically using best checksumming function avx Jun 20 19:02:24.317011 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:02:24.328933 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:02:24.337203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:02:24.347819 systemd-udevd[409]: Using default interface naming scheme 'v255'. Jun 20 19:02:24.351079 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:02:24.359259 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:02:24.373666 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jun 20 19:02:24.395747 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:02:24.401106 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:02:24.444113 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:02:24.449182 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:02:24.464658 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:02:24.466417 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:02:24.467681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:02:24.468869 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:02:24.474104 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:02:24.483507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:02:24.513009 kernel: scsi host0: Virtio SCSI HBA Jun 20 19:02:24.518450 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:02:24.521994 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jun 20 19:02:24.537228 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:02:24.537346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:02:24.569474 kernel: ACPI: bus type USB registered Jun 20 19:02:24.569500 kernel: usbcore: registered new interface driver usbfs Jun 20 19:02:24.569510 kernel: usbcore: registered new interface driver hub Jun 20 19:02:24.569517 kernel: usbcore: registered new device driver usb Jun 20 19:02:24.569531 kernel: libata version 3.00 loaded. Jun 20 19:02:24.537924 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:02:24.538932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:02:24.539902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:02:24.568650 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:02:24.579693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:02:24.611004 kernel: AVX2 version of gcm_enc/dec engaged. 
Jun 20 19:02:24.611059 kernel: AES CTR mode by8 optimization enabled Jun 20 19:02:24.611075 kernel: ahci 0000:00:1f.2: version 3.0 Jun 20 19:02:24.613174 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jun 20 19:02:24.613203 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jun 20 19:02:24.613369 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jun 20 19:02:24.619153 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 19:02:24.619336 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jun 20 19:02:24.619991 kernel: scsi host1: ahci Jun 20 19:02:24.620991 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jun 20 19:02:24.622004 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 19:02:24.622126 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jun 20 19:02:24.622219 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jun 20 19:02:24.623026 kernel: scsi host2: ahci Jun 20 19:02:24.623136 kernel: hub 1-0:1.0: USB hub found Jun 20 19:02:24.623236 kernel: hub 1-0:1.0: 4 ports detected Jun 20 19:02:24.627994 kernel: scsi host3: ahci Jun 20 19:02:24.631156 kernel: scsi host4: ahci Jun 20 19:02:24.632428 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jun 20 19:02:24.633143 kernel: scsi host5: ahci Jun 20 19:02:24.633253 kernel: hub 2-0:1.0: USB hub found Jun 20 19:02:24.633382 kernel: hub 2-0:1.0: 4 ports detected Jun 20 19:02:24.635037 kernel: scsi host6: ahci Jun 20 19:02:24.635164 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jun 20 19:02:24.635175 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jun 20 19:02:24.635183 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jun 20 19:02:24.635190 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jun 20 19:02:24.635197 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jun 20 19:02:24.635205 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jun 20 19:02:24.644364 kernel: sd 0:0:0:0: Power-on or device reset occurred Jun 20 19:02:24.644625 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jun 20 19:02:24.644782 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 19:02:24.644919 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jun 20 19:02:24.645080 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jun 20 19:02:24.653546 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:02:24.653601 kernel: GPT:17805311 != 80003071 Jun 20 19:02:24.653615 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:02:24.653628 kernel: GPT:17805311 != 80003071 Jun 20 19:02:24.653639 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 19:02:24.653660 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:02:24.653674 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 19:02:24.685236 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:02:24.693154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:02:24.703346 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 20 19:02:24.869006 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jun 20 19:02:24.949002 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 20 19:02:24.949092 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jun 20 19:02:24.949109 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jun 20 19:02:24.951029 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 20 19:02:24.951144 kernel: ata1.00: applying bridge limits Jun 20 19:02:24.955863 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jun 20 19:02:24.955929 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jun 20 19:02:24.955951 kernel: ata1.00: configured for UDMA/100 Jun 20 19:02:24.959632 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jun 20 19:02:24.964032 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 20 19:02:25.019470 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 20 19:02:25.019793 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:02:25.022025 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:02:25.035046 kernel: usbcore: registered new interface driver usbhid Jun 20 19:02:25.035081 kernel: usbhid: USB HID core driver Jun 20 19:02:25.038998 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (466) Jun 20 19:02:25.045005 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (459) Jun 20 19:02:25.045035 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 20 19:02:25.055628 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jun 20 19:02:25.055688 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jun 20 19:02:25.055549 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jun 20 19:02:25.071849 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jun 20 19:02:25.073148 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jun 20 19:02:25.081126 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 19:02:25.088712 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jun 20 19:02:25.096089 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:02:25.101121 disk-uuid[581]: Primary Header is updated. Jun 20 19:02:25.101121 disk-uuid[581]: Secondary Entries is updated. Jun 20 19:02:25.101121 disk-uuid[581]: Secondary Header is updated. Jun 20 19:02:25.106005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:02:26.117043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:02:26.119180 disk-uuid[582]: The operation has completed successfully. Jun 20 19:02:26.190495 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:02:26.190609 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:02:26.227115 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:02:26.229752 sh[599]: Success Jun 20 19:02:26.240004 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 20 19:02:26.278471 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jun 20 19:02:26.288101 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:02:26.290527 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:02:26.305978 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91 Jun 20 19:02:26.306006 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:02:26.306015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 19:02:26.308117 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 19:02:26.310692 kernel: BTRFS info (device dm-0): using free space tree Jun 20 19:02:26.317993 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 20 19:02:26.320225 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:02:26.321158 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:02:26.335081 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:02:26.336657 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:02:26.354438 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:02:26.354466 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:02:26.354476 kernel: BTRFS info (device sda6): using free space tree Jun 20 19:02:26.358858 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 19:02:26.358879 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 19:02:26.363002 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:02:26.370096 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:02:26.374117 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:02:26.396139 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:02:26.404092 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:02:26.426507 systemd-networkd[777]: lo: Link UP Jun 20 19:02:26.426514 systemd-networkd[777]: lo: Gained carrier Jun 20 19:02:26.428112 systemd-networkd[777]: Enumeration completed Jun 20 19:02:26.428249 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:02:26.428907 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:02:26.428910 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:02:26.429661 systemd[1]: Reached target network.target - Network. Jun 20 19:02:26.430416 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:02:26.430419 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:02:26.431581 systemd-networkd[777]: eth0: Link UP Jun 20 19:02:26.431586 systemd-networkd[777]: eth0: Gained carrier Jun 20 19:02:26.431593 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 20 19:02:26.440386 systemd-networkd[777]: eth1: Link UP Jun 20 19:02:26.440390 systemd-networkd[777]: eth1: Gained carrier Jun 20 19:02:26.440399 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:02:26.448525 ignition[740]: Ignition 2.20.0 Jun 20 19:02:26.448537 ignition[740]: Stage: fetch-offline Jun 20 19:02:26.448574 ignition[740]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:02:26.448586 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:02:26.450790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:02:26.448700 ignition[740]: parsed url from cmdline: "" Jun 20 19:02:26.448705 ignition[740]: no config URL provided Jun 20 19:02:26.448713 ignition[740]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:02:26.448724 ignition[740]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:02:26.448730 ignition[740]: failed to fetch config: resource requires networking Jun 20 19:02:26.448914 ignition[740]: Ignition finished successfully Jun 20 19:02:26.457124 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 19:02:26.466021 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:02:26.467871 ignition[785]: Ignition 2.20.0 Jun 20 19:02:26.467881 ignition[785]: Stage: fetch Jun 20 19:02:26.468075 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:02:26.468087 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:02:26.468173 ignition[785]: parsed url from cmdline: "" Jun 20 19:02:26.468178 ignition[785]: no config URL provided Jun 20 19:02:26.468185 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:02:26.468196 ignition[785]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:02:26.468222 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jun 20 19:02:26.468385 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 20 19:02:26.502011 systemd-networkd[777]: eth0: DHCPv4 address 157.180.24.181/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 19:02:26.669537 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jun 20 19:02:26.674947 ignition[785]: GET result: OK Jun 20 19:02:26.675095 ignition[785]: parsing config with SHA512: 3cf4e246e206a51d03c10001e51472b6ff25b9051ec33c0d8c9012780156fe96e8af3a6d28756ce366894685ed864c3f8aae5c90946ce2dc2140325c5e23ed76 Jun 20 19:02:26.683944 unknown[785]: fetched base config from "system" Jun 20 19:02:26.683962 unknown[785]: fetched base config from "system" Jun 20 19:02:26.684515 ignition[785]: fetch: fetch complete Jun 20 19:02:26.683971 unknown[785]: fetched user config from "hetzner" Jun 20 19:02:26.684522 ignition[785]: fetch: fetch passed Jun 20 19:02:26.686120 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:02:26.684568 ignition[785]: Ignition finished successfully Jun 20 19:02:26.693134 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
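The fetch stage's first GET of http://169.254.169.254/hetzner/v1/userdata fails with "network is unreachable" because DHCP has not completed yet; attempt #2 succeeds once eth0 has its 157.180.24.181 lease, and Ignition then logs the SHA512 of the config it parsed. Below is a rough Python sketch of the same retry-then-digest pattern (illustration only: Ignition itself is written in Go, its timeout and back-off policy are not shown in this log, and the retry count and delay here are assumptions):

```python
import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint taken from the log

def fetch_userdata(retries=5, delay=2.0):
    """GET the instance userdata, retrying while the link is still coming up."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET {USERDATA_URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable")

if __name__ == "__main__":
    body = fetch_userdata()
    # Ignition logs the SHA512 of the raw config it parsed; the same digest can be
    # reproduced with hashlib for comparison against the journal entry above.
    print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
```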
Jun 20 19:02:26.715871 ignition[793]: Ignition 2.20.0 Jun 20 19:02:26.715892 ignition[793]: Stage: kargs Jun 20 19:02:26.716325 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:02:26.716352 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:02:26.718726 ignition[793]: kargs: kargs passed Jun 20 19:02:26.718826 ignition[793]: Ignition finished successfully Jun 20 19:02:26.723309 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:02:26.730169 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:02:26.745504 ignition[800]: Ignition 2.20.0 Jun 20 19:02:26.745522 ignition[800]: Stage: disks Jun 20 19:02:26.745817 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:02:26.745838 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:02:26.748911 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:02:26.747601 ignition[800]: disks: disks passed Jun 20 19:02:26.754110 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:02:26.747672 ignition[800]: Ignition finished successfully Jun 20 19:02:26.754895 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:02:26.755906 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:02:26.756788 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:02:26.757828 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:02:26.765145 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:02:26.776533 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 20 19:02:26.778406 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:02:26.784099 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:02:26.845211 kernel: EXT4-fs (sda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none. Jun 20 19:02:26.845878 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:02:26.846793 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:02:26.852076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:02:26.854613 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:02:26.858148 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:02:26.858811 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:02:26.858843 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:02:26.867891 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (816) Jun 20 19:02:26.862791 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:02:26.868923 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 20 19:02:26.879965 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:02:26.880004 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:02:26.880020 kernel: BTRFS info (device sda6): using free space tree Jun 20 19:02:26.880034 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 19:02:26.880052 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 19:02:26.885531 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:02:26.926151 coreos-metadata[818]: Jun 20 19:02:26.926 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jun 20 19:02:26.927361 coreos-metadata[818]: Jun 20 19:02:26.926 INFO Fetch successful Jun 20 19:02:26.929450 coreos-metadata[818]: Jun 20 19:02:26.928 INFO wrote hostname ci-4230-2-0-e-b360e0c6ec to /sysroot/etc/hostname Jun 20 19:02:26.931930 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:02:26.930356 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:02:26.934778 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:02:26.938092 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:02:26.942175 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:02:27.001470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:02:27.006058 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:02:27.010153 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:02:27.015025 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:02:27.033258 ignition[933]: INFO : Ignition 2.20.0 Jun 20 19:02:27.034300 ignition[933]: INFO : Stage: mount Jun 20 19:02:27.035335 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:02:27.035335 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:02:27.038128 ignition[933]: INFO : mount: mount passed Jun 20 19:02:27.038128 ignition[933]: INFO : Ignition finished successfully Jun 20 19:02:27.038372 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:02:27.039386 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:02:27.045038 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:02:27.302567 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:02:27.307247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:02:27.317444 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (946) Jun 20 19:02:27.317486 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:02:27.319066 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:02:27.321090 kernel: BTRFS info (device sda6): using free space tree Jun 20 19:02:27.325437 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 19:02:27.325468 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 19:02:27.327375 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:02:27.345100 ignition[963]: INFO : Ignition 2.20.0 Jun 20 19:02:27.345100 ignition[963]: INFO : Stage: files Jun 20 19:02:27.347257 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:02:27.347257 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:02:27.350213 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:02:27.350213 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:02:27.350213 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:02:27.354180 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:02:27.354180 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:02:27.354180 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:02:27.353107 unknown[963]: wrote ssh authorized keys file for user: core Jun 20 19:02:27.357488 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:02:27.357488 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 20 19:02:27.624493 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:02:27.940761 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:02:27.940761 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:02:27.940761 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 19:02:28.089317 systemd-networkd[777]: eth1: Gained IPv6LL Jun 20 19:02:28.153315 systemd-networkd[777]: eth0: Gained IPv6LL Jun 20 19:02:28.595196 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 19:02:28.697466 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:02:28.697466 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:02:28.699473 ignition[963]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:02:28.699473 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jun 20 19:02:29.437063 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 19:02:29.770268 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:02:29.770268 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:02:29.772469 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:02:29.772469 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:02:29.772469 ignition[963]: INFO : files: files passed Jun 20 19:02:29.772469 ignition[963]: INFO : Ignition finished successfully Jun 20 19:02:29.773057 systemd[1]: Finished ignition-files.service - Ignition (files). 
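Everything the files stage does above is driven by the fetched Ignition config: the Helm and cilium-cli archives, the YAML manifests, /etc/flatcar/update.conf, the kubernetes sysext link, the prepare-helm.service unit and the coreos-metadata drop-in. The config itself never appears in the journal, so the following is only a hedged sketch of how one of those entries could be expressed in Ignition's JSON config format, built as a Python dict; the spec version and the omission of a verification hash are assumptions, and only the URL and paths are taken from the log:

```python
import json

# Hypothetical fragment in the spirit of the log's op(4) and op(a); the real config
# served by the Hetzner metadata service is not shown in the journal.
ignition_fragment = {
    "ignition": {"version": "3.4.0"},          # assumed spec version
    "storage": {
        "files": [
            {
                "path": "/opt/bin/cilium.tar.gz",
                "contents": {
                    "source": "https://github.com/cilium/cilium-cli/releases/"
                              "download/v0.12.12/cilium-linux-amd64.tar.gz"
                },
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
            }
        ],
    },
}

print(json.dumps(ignition_fragment, indent=2))
```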
Jun 20 19:02:29.783070 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:02:29.787100 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:02:29.787940 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:02:29.788026 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:02:29.797420 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:02:29.797420 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:02:29.799485 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:02:29.799330 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:02:29.800186 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:02:29.808077 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:02:29.821468 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:02:29.821545 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:02:29.822680 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:02:29.823486 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:02:29.824499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:02:29.825669 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:02:29.835043 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:02:29.840106 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:02:29.846660 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:02:29.847229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:02:29.848314 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:02:29.849389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:02:29.849473 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:02:29.850654 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:02:29.851271 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:02:29.852209 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:02:29.853108 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:02:29.853953 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:02:29.854944 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:02:29.855908 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:02:29.856927 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:02:29.858027 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:02:29.859019 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:02:29.859908 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:02:29.860007 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:02:29.861129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:02:29.861764 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:02:29.862639 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:02:29.864084 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:02:29.864873 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:02:29.864950 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:02:29.866330 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:02:29.866418 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:02:29.867049 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:02:29.867148 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:02:29.867869 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 19:02:29.867944 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:02:29.879405 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:02:29.881104 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:02:29.883266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:02:29.883408 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:02:29.884761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:02:29.884879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:02:29.890999 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:02:29.891076 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:02:29.897864 ignition[1016]: INFO : Ignition 2.20.0 Jun 20 19:02:29.899366 ignition[1016]: INFO : Stage: umount Jun 20 19:02:29.899366 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:02:29.899366 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:02:29.899366 ignition[1016]: INFO : umount: umount passed Jun 20 19:02:29.899366 ignition[1016]: INFO : Ignition finished successfully Jun 20 19:02:29.901500 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:02:29.902398 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:02:29.902455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:02:29.907998 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:02:29.908054 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:02:29.909094 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:02:29.909126 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:02:29.909580 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 19:02:29.909609 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 19:02:29.913409 systemd[1]: Stopped target network.target - Network. Jun 20 19:02:29.914256 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:02:29.914295 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:02:29.915150 systemd[1]: Stopped target paths.target - Path Units. 
Jun 20 19:02:29.919907 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:02:29.924016 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:02:29.929559 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:02:29.930645 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:02:29.932010 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:02:29.932045 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:02:29.936059 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:02:29.936090 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:02:29.936529 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:02:29.936572 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:02:29.937022 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:02:29.937072 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:02:29.937706 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:02:29.939131 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:02:29.940460 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:02:29.940532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:02:29.941400 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:02:29.941457 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:02:29.944530 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:02:29.944610 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:02:29.947876 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:02:29.948106 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:02:29.948181 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:02:29.949714 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:02:29.950088 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:02:29.950127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:02:29.956070 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:02:29.957220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:02:29.957273 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:02:29.958296 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:02:29.958346 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:02:29.959888 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:02:29.959924 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:02:29.960566 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:02:29.960598 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:02:29.961875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:02:29.964059 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jun 20 19:02:29.964109 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:02:29.968907 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:02:29.968991 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:02:29.974429 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:02:29.974529 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:02:29.975695 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:02:29.975724 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:02:29.976587 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:02:29.976610 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:02:29.977536 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:02:29.977569 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:02:29.978898 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:02:29.978930 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:02:29.979913 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:02:29.979944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:02:29.987065 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:02:29.988865 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:02:29.988904 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:02:29.990004 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 20 19:02:29.990036 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:02:29.991063 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:02:29.991100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:02:29.991618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:02:29.991649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:02:29.993718 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:02:29.993759 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:02:29.993969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:02:29.994051 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:02:29.995153 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:02:30.007089 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:02:30.012324 systemd[1]: Switching root. Jun 20 19:02:30.056764 systemd-journald[188]: Journal stopped Jun 20 19:02:30.832751 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
Jun 20 19:02:30.832794 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:02:30.832808 kernel: SELinux: policy capability open_perms=1 Jun 20 19:02:30.832816 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:02:30.832824 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:02:30.832837 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:02:30.832844 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:02:30.832852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:02:30.832859 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:02:30.832867 kernel: audit: type=1403 audit(1750446150.174:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:02:30.832875 systemd[1]: Successfully loaded SELinux policy in 52.627ms. Jun 20 19:02:30.832891 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.429ms. Jun 20 19:02:30.832901 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:02:30.832909 systemd[1]: Detected virtualization kvm. Jun 20 19:02:30.832918 systemd[1]: Detected architecture x86-64. Jun 20 19:02:30.832928 systemd[1]: Detected first boot. Jun 20 19:02:30.832938 systemd[1]: Hostname set to . Jun 20 19:02:30.832946 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:02:30.832954 zram_generator::config[1061]: No configuration found. Jun 20 19:02:30.832967 kernel: Guest personality initialized and is inactive Jun 20 19:02:30.834260 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 19:02:30.834274 kernel: Initialized host personality Jun 20 19:02:30.834283 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:02:30.834292 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:02:30.834313 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:02:30.834329 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:02:30.834345 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:02:30.834365 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:02:30.834381 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:02:30.834395 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:02:30.834409 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:02:30.834421 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:02:30.834435 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:02:30.834484 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:02:30.834502 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:02:30.834518 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:02:30.834537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:02:30.834553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jun 20 19:02:30.834562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:02:30.834571 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:02:30.834579 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:02:30.834588 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:02:30.834598 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:02:30.834606 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:02:30.834615 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:02:30.834623 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:02:30.834631 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:02:30.834640 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:02:30.834654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:02:30.834670 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:02:30.834685 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:02:30.834707 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:02:30.834728 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:02:30.834740 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:02:30.834751 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:02:30.834760 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:02:30.834768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:02:30.834778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:02:30.834786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:02:30.834801 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:02:30.834816 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:02:30.834829 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:02:30.834838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:30.834847 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:02:30.834855 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:02:30.834863 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:02:30.834877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:02:30.834892 systemd[1]: Reached target machines.target - Containers. Jun 20 19:02:30.834908 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:02:30.834919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:02:30.834928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jun 20 19:02:30.834936 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:02:30.834944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:02:30.834953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:02:30.834963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:02:30.834985 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:02:30.834994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:02:30.835006 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:02:30.835021 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:02:30.835037 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:02:30.835052 kernel: loop: module loaded Jun 20 19:02:30.835062 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:02:30.835071 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:02:30.835084 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:02:30.835092 kernel: ACPI: bus type drm_connector registered Jun 20 19:02:30.835100 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:02:30.835108 kernel: fuse: init (API version 7.39) Jun 20 19:02:30.835116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:02:30.835124 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:02:30.835132 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:02:30.835145 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:02:30.835163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:02:30.835174 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:02:30.835182 systemd[1]: Stopped verity-setup.service. Jun 20 19:02:30.835208 systemd-journald[1145]: Collecting audit messages is disabled. Jun 20 19:02:30.835257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:30.835277 systemd-journald[1145]: Journal started Jun 20 19:02:30.835508 systemd-journald[1145]: Runtime Journal (/run/log/journal/d5b063c9f6da4af8b12b82210e3d48e7) is 4.8M, max 38.3M, 33.5M free. Jun 20 19:02:30.596353 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:02:30.838025 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:02:30.603951 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 19:02:30.604305 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:02:30.839202 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:02:30.839730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:02:30.840250 systemd[1]: Mounted media.mount - External Media Directory. 
Jun 20 19:02:30.840708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:02:30.841379 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:02:30.842011 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:02:30.842632 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:02:30.843295 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:02:30.843912 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:02:30.844051 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:02:30.844858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:02:30.844966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:02:30.845735 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:02:30.845847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:02:30.846562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:02:30.846671 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:02:30.847677 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:02:30.847875 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:02:30.848729 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:02:30.848914 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:02:30.849663 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:02:30.850484 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:02:30.851292 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:02:30.852080 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:02:30.859084 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:02:30.864696 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:02:30.868010 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:02:30.868795 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:02:30.868877 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:02:30.871833 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:02:30.878416 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:02:30.883093 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:02:30.884284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:02:30.885632 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:02:30.895769 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:02:30.896866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:02:30.897886 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jun 20 19:02:30.898750 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:02:30.900326 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:02:30.904082 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:02:30.906079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:02:30.907882 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:02:30.913210 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:02:30.914362 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:02:30.915903 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:02:30.918411 systemd-journald[1145]: Time spent on flushing to /var/log/journal/d5b063c9f6da4af8b12b82210e3d48e7 is 24.256ms for 1149 entries. Jun 20 19:02:30.918411 systemd-journald[1145]: System Journal (/var/log/journal/d5b063c9f6da4af8b12b82210e3d48e7) is 8M, max 584.8M, 576.8M free. Jun 20 19:02:30.962949 systemd-journald[1145]: Received client request to flush runtime journal. Jun 20 19:02:30.963046 kernel: loop0: detected capacity change from 0 to 138176 Jun 20 19:02:30.918451 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:02:30.921452 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:02:30.930685 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:02:30.934107 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 19:02:30.950993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:02:30.964926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:02:30.969477 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 20 19:02:30.975894 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:02:30.985858 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jun 20 19:02:30.985872 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jun 20 19:02:30.998020 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:02:30.998447 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:02:31.005448 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:02:31.017009 kernel: loop1: detected capacity change from 0 to 8 Jun 20 19:02:31.035846 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:02:31.041008 kernel: loop2: detected capacity change from 0 to 147912 Jun 20 19:02:31.044122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:02:31.052110 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Jun 20 19:02:31.052379 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Jun 20 19:02:31.056098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 20 19:02:31.094225 kernel: loop3: detected capacity change from 0 to 221472 Jun 20 19:02:31.130035 kernel: loop4: detected capacity change from 0 to 138176 Jun 20 19:02:31.149995 kernel: loop5: detected capacity change from 0 to 8 Jun 20 19:02:31.152127 kernel: loop6: detected capacity change from 0 to 147912 Jun 20 19:02:31.173005 kernel: loop7: detected capacity change from 0 to 221472 Jun 20 19:02:31.198028 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jun 20 19:02:31.198479 (sd-merge)[1215]: Merged extensions into '/usr'. Jun 20 19:02:31.202926 systemd[1]: Reload requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:02:31.202939 systemd[1]: Reloading... Jun 20 19:02:31.293006 zram_generator::config[1249]: No configuration found. Jun 20 19:02:31.371719 ldconfig[1182]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:02:31.385137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:02:31.439046 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:02:31.439483 systemd[1]: Reloading finished in 236 ms. Jun 20 19:02:31.453947 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:02:31.454722 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:02:31.464106 systemd[1]: Starting ensure-sysext.service... Jun 20 19:02:31.466861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:02:31.477113 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:02:31.477124 systemd[1]: Reloading... Jun 20 19:02:31.489225 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:02:31.489675 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:02:31.490300 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:02:31.490548 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jun 20 19:02:31.490644 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jun 20 19:02:31.492961 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:02:31.493054 systemd-tmpfiles[1287]: Skipping /boot Jun 20 19:02:31.499908 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:02:31.499969 systemd-tmpfiles[1287]: Skipping /boot Jun 20 19:02:31.534995 zram_generator::config[1316]: No configuration found. Jun 20 19:02:31.615900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:02:31.676049 systemd[1]: Reloading finished in 198 ms. Jun 20 19:02:31.689292 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:02:31.696465 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:02:31.708148 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
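The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner images onto /usr, with the kubernetes image reachable through the /etc/extensions/kubernetes.raw link that Ignition wrote earlier; the reloads that follow are systemd picking up the merged tree. As a rough illustration (not systemd's own implementation), here is a small Python sketch that enumerates extension images in some of the usual systemd-sysext search directories:

```python
from pathlib import Path

# Three of the directories systemd-sysext consults for *.raw images or extension
# directories; further locations exist under /usr and are omitted here.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    """Yield (extension name, resolved image path) for every entry found."""
    for d in SEARCH_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            # e.g. /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/...
            yield entry.stem, entry.resolve()

if __name__ == "__main__":
    for name, target in list_extension_images():
        print(f"{name}: {target}")
```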
Jun 20 19:02:31.712804 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:02:31.715761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:02:31.722501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:02:31.725247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:02:31.728607 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:02:31.733251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:31.733382 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:02:31.735633 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:02:31.738151 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:02:31.745144 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:02:31.745753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:02:31.745899 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:02:31.752945 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:02:31.755010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:31.755949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:02:31.756096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:02:31.764188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:31.764415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:02:31.770184 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:02:31.770731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:02:31.770850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:02:31.770963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:31.771760 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:02:31.773717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:02:31.774324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:02:31.775859 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:02:31.776247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 20 19:02:31.784513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:02:31.787176 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:02:31.787869 systemd-udevd[1371]: Using default interface naming scheme 'v255'. Jun 20 19:02:31.790023 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:02:31.791668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:02:31.792245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:02:31.799733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:31.799927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:02:31.811451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:02:31.813789 augenrules[1398]: No rules Jun 20 19:02:31.817016 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:02:31.819145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:02:31.823299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:02:31.823851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:02:31.823961 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:02:31.824246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:31.825051 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:02:31.827036 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:02:31.828157 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:02:31.829286 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:02:31.830873 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:02:31.832588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:02:31.832794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:02:31.833837 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:02:31.834404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:02:31.836761 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:02:31.837693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:02:31.840069 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:02:31.840189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:02:31.845526 systemd[1]: Finished ensure-sysext.service. Jun 20 19:02:31.852153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 20 19:02:31.852211 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:02:31.861484 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:02:31.862518 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:02:31.863643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:02:31.878783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:02:31.914357 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:02:31.945796 systemd-resolved[1368]: Positive Trust Anchors: Jun 20 19:02:31.947139 systemd-resolved[1368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:02:31.947212 systemd-resolved[1368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:02:31.957910 systemd-resolved[1368]: Using system hostname 'ci-4230-2-0-e-b360e0c6ec'. Jun 20 19:02:31.965558 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:02:31.968088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:02:31.982842 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:02:31.986253 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:02:32.005115 systemd-networkd[1436]: lo: Link UP Jun 20 19:02:32.005122 systemd-networkd[1436]: lo: Gained carrier Jun 20 19:02:32.008147 systemd-networkd[1436]: Enumeration completed Jun 20 19:02:32.008301 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:02:32.008725 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:02:32.008777 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:02:32.010663 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1423) Jun 20 19:02:32.010267 systemd[1]: Reached target network.target - Network. Jun 20 19:02:32.011167 systemd-networkd[1436]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:02:32.011501 systemd-networkd[1436]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:02:32.014096 systemd-networkd[1436]: eth0: Link UP Jun 20 19:02:32.014155 systemd-networkd[1436]: eth0: Gained carrier Jun 20 19:02:32.014202 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 20 19:02:32.017495 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:02:32.018138 systemd-networkd[1436]: eth1: Link UP Jun 20 19:02:32.019097 systemd-networkd[1436]: eth1: Gained carrier Jun 20 19:02:32.019165 systemd-networkd[1436]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:02:32.020616 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:02:32.034292 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:02:32.048079 systemd-networkd[1436]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:02:32.049172 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Jun 20 19:02:32.055999 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:02:32.062999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jun 20 19:02:32.072319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 19:02:32.078623 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:02:32.079091 systemd-networkd[1436]: eth0: DHCPv4 address 157.180.24.181/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 19:02:32.080144 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Jun 20 19:02:32.082535 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jun 20 19:02:32.082586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:32.082714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:02:32.087128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:02:32.090314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:02:32.091020 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:02:32.093400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:02:32.094268 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:02:32.094298 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:02:32.094322 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:02:32.094337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:02:32.096303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:02:32.096474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 20 19:02:32.100574 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:02:32.111206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:02:32.112576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:02:32.114438 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:02:32.114613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:02:32.117056 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:02:32.117114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:02:32.119318 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jun 20 19:02:32.119349 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jun 20 19:02:32.122018 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 20 19:02:32.122156 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jun 20 19:02:32.122284 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 20 19:02:32.123166 kernel: Console: switching to colour dummy device 80x25 Jun 20 19:02:32.126942 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 20 19:02:32.126990 kernel: [drm] features: -context_init Jun 20 19:02:32.129006 kernel: [drm] number of scanouts: 1 Jun 20 19:02:32.129041 kernel: [drm] number of cap sets: 0 Jun 20 19:02:32.136222 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jun 20 19:02:32.136265 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jun 20 19:02:32.147205 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 20 19:02:32.147273 kernel: Console: switching to colour frame buffer device 160x50 Jun 20 19:02:32.171193 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 20 19:02:32.171420 kernel: EDAC MC: Ver: 3.0.0 Jun 20 19:02:32.177262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:02:32.190399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:02:32.190956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:02:32.198280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:02:32.209605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:02:32.209771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:02:32.212768 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:02:32.217167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:02:32.258293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:02:32.335854 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 19:02:32.340105 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 19:02:32.349446 lvm[1486]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:02:32.373809 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jun 20 19:02:32.377487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:02:32.377603 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:02:32.377805 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:02:32.377900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:02:32.378154 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:02:32.378318 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:02:32.378394 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:02:32.378457 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:02:32.378486 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:02:32.378543 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:02:32.382616 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:02:32.383776 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:02:32.386373 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:02:32.388183 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:02:32.388689 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:02:32.391205 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:02:32.391887 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:02:32.394170 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 19:02:32.397931 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:02:32.398472 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:02:32.398860 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:02:32.401406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:02:32.401469 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:02:32.402293 lvm[1490]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:02:32.409061 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:02:32.412407 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:02:32.415175 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:02:32.421270 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:02:32.425102 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:02:32.427404 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:02:32.431096 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:02:32.432679 jq[1494]: false Jun 20 19:02:32.433207 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:02:32.437091 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Jun 20 19:02:32.440314 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:02:32.444135 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:02:32.460992 coreos-metadata[1492]: Jun 20 19:02:32.460 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jun 20 19:02:32.460992 coreos-metadata[1492]: Jun 20 19:02:32.460 INFO Fetch successful Jun 20 19:02:32.460992 coreos-metadata[1492]: Jun 20 19:02:32.460 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jun 20 19:02:32.460992 coreos-metadata[1492]: Jun 20 19:02:32.460 INFO Fetch successful Jun 20 19:02:32.478012 extend-filesystems[1497]: Found loop4 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found loop5 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found loop6 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found loop7 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda1 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda2 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda3 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found usr Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda4 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda6 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda7 Jun 20 19:02:32.478012 extend-filesystems[1497]: Found sda9 Jun 20 19:02:32.478012 extend-filesystems[1497]: Checking size of /dev/sda9 Jun 20 19:02:32.519736 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jun 20 19:02:32.463112 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:02:32.476284 dbus-daemon[1493]: [system] SELinux support is enabled Jun 20 19:02:32.520775 extend-filesystems[1497]: Resized partition /dev/sda9 Jun 20 19:02:32.466544 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:02:32.521761 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024) Jun 20 19:02:32.469341 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:02:32.473104 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:02:32.475609 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:02:32.476446 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:02:32.484502 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 19:02:32.528796 jq[1511]: true Jun 20 19:02:32.500925 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:02:32.501098 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:02:32.503271 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:02:32.503412 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:02:32.520954 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:02:32.521171 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jun 20 19:02:32.557676 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1424) Jun 20 19:02:32.556291 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:02:32.559349 tar[1523]: linux-amd64/helm Jun 20 19:02:32.557428 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:02:32.557461 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:02:32.560554 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:02:32.560577 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:02:32.579332 update_engine[1509]: I20250620 19:02:32.579169 1509 main.cc:92] Flatcar Update Engine starting Jun 20 19:02:32.586023 jq[1530]: true Jun 20 19:02:32.596510 update_engine[1509]: I20250620 19:02:32.596457 1509 update_check_scheduler.cc:74] Next update check in 10m59s Jun 20 19:02:32.597191 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:02:32.611126 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:02:32.650551 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:02:32.655132 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:02:32.662043 systemd-logind[1504]: New seat seat0. Jun 20 19:02:32.668901 systemd-logind[1504]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:02:32.671219 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:02:32.671398 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:02:32.712816 bash[1567]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:02:32.715150 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:02:32.725829 systemd[1]: Starting sshkeys.service... Jun 20 19:02:32.745560 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jun 20 19:02:32.749345 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 19:02:32.753521 locksmithd[1545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:02:32.757309 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 19:02:32.777062 extend-filesystems[1515]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 20 19:02:32.777062 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 5 Jun 20 19:02:32.777062 extend-filesystems[1515]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jun 20 19:02:32.776835 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:02:32.778539 extend-filesystems[1497]: Resized filesystem in /dev/sda9 Jun 20 19:02:32.778539 extend-filesystems[1497]: Found sr0 Jun 20 19:02:32.779611 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
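The extend-filesystems output above records resize2fs growing /dev/sda9 online from 1617920 to 9393147 4 KiB blocks. As a quick sanity check on those figures (an illustrative calculation, not part of the log), that corresponds to the root filesystem growing from roughly 6.2 GiB to roughly 35.8 GiB:

```python
# Illustrative arithmetic for the resize2fs figures logged above (4 KiB blocks).
BLOCK_SIZE = 4096          # "(4k) blocks" per the extend-filesystems output
OLD_BLOCKS = 1_617_920     # block count before the online resize
NEW_BLOCKS = 9_393_147     # block count reported after the resize

def gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~6.17 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~35.83 GiB
```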
Jun 20 19:02:32.813336 coreos-metadata[1574]: Jun 20 19:02:32.812 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jun 20 19:02:32.819455 coreos-metadata[1574]: Jun 20 19:02:32.817 INFO Fetch successful Jun 20 19:02:32.821336 unknown[1574]: wrote ssh authorized keys file for user: core Jun 20 19:02:32.827500 containerd[1527]: time="2025-06-20T19:02:32.824122847Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 19:02:32.851538 update-ssh-keys[1581]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:02:32.852051 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:02:32.858625 systemd[1]: Finished sshkeys.service. Jun 20 19:02:32.870270 containerd[1527]: time="2025-06-20T19:02:32.870219912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.872865875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.872888688Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.872902904Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873040763Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873056142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873101407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873110905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873276064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873288157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873298396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874061 containerd[1527]: time="2025-06-20T19:02:32.873304978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874259 containerd[1527]: time="2025-06-20T19:02:32.873362867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874259 containerd[1527]: time="2025-06-20T19:02:32.873518028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874259 containerd[1527]: time="2025-06-20T19:02:32.873616232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:02:32.874259 containerd[1527]: time="2025-06-20T19:02:32.873626421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 19:02:32.874259 containerd[1527]: time="2025-06-20T19:02:32.873689350Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 19:02:32.874259 containerd[1527]: time="2025-06-20T19:02:32.873724886Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:02:32.878467 containerd[1527]: time="2025-06-20T19:02:32.878450209Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 19:02:32.878634 containerd[1527]: time="2025-06-20T19:02:32.878620948Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 19:02:32.878739 containerd[1527]: time="2025-06-20T19:02:32.878727639Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 19:02:32.879168 containerd[1527]: time="2025-06-20T19:02:32.879022081Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 19:02:32.879168 containerd[1527]: time="2025-06-20T19:02:32.879037751Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 19:02:32.879168 containerd[1527]: time="2025-06-20T19:02:32.879129753Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 19:02:32.879680 containerd[1527]: time="2025-06-20T19:02:32.879657483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 19:02:32.879805 containerd[1527]: time="2025-06-20T19:02:32.879790773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880088241Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880112736Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880126512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880137052Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880145598Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880155577Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880166157Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880175003Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880185423Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880193999Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880209969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880220148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880240837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880412 containerd[1527]: time="2025-06-20T19:02:32.880250054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880258921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880268489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880277365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880286062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880295219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880306590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880315046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880323442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880333030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880343008Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880360030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880369187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.880623 containerd[1527]: time="2025-06-20T19:02:32.880376992Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881172644Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881192722Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881201038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881209935Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881278372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881289593Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881298049Z" level=info msg="NRI interface is disabled by configuration." Jun 20 19:02:32.881365 containerd[1527]: time="2025-06-20T19:02:32.881306305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 20 19:02:32.883182 containerd[1527]: time="2025-06-20T19:02:32.881894759Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 19:02:32.883182 containerd[1527]: time="2025-06-20T19:02:32.881938631Z" level=info msg="Connect containerd service" Jun 20 19:02:32.883182 containerd[1527]: time="2025-06-20T19:02:32.881965081Z" level=info msg="using legacy CRI server" Jun 20 19:02:32.883182 containerd[1527]: time="2025-06-20T19:02:32.881989186Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:02:32.883182 containerd[1527]: time="2025-06-20T19:02:32.882069627Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 19:02:32.883182 containerd[1527]: time="2025-06-20T19:02:32.882567060Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:02:32.883644 
containerd[1527]: time="2025-06-20T19:02:32.883620025Z" level=info msg="Start subscribing containerd event" Jun 20 19:02:32.883876 containerd[1527]: time="2025-06-20T19:02:32.883865735Z" level=info msg="Start recovering state" Jun 20 19:02:32.883953 containerd[1527]: time="2025-06-20T19:02:32.883942609Z" level=info msg="Start event monitor" Jun 20 19:02:32.884014 containerd[1527]: time="2025-06-20T19:02:32.884004235Z" level=info msg="Start snapshots syncer" Jun 20 19:02:32.884055 containerd[1527]: time="2025-06-20T19:02:32.884047386Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:02:32.884089 containerd[1527]: time="2025-06-20T19:02:32.884082041Z" level=info msg="Start streaming server" Jun 20 19:02:32.886006 containerd[1527]: time="2025-06-20T19:02:32.884918570Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:02:32.886006 containerd[1527]: time="2025-06-20T19:02:32.885032323Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:02:32.886241 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:02:32.886746 containerd[1527]: time="2025-06-20T19:02:32.886732512Z" level=info msg="containerd successfully booted in 0.066050s" Jun 20 19:02:32.886818 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:02:32.908868 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:02:32.918392 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:02:32.924253 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:02:32.924425 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:02:32.935051 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:02:32.943673 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:02:32.957400 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:02:32.960293 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:02:32.963664 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:02:33.156615 tar[1523]: linux-amd64/LICENSE Jun 20 19:02:33.156615 tar[1523]: linux-amd64/README.md Jun 20 19:02:33.165831 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:02:33.465198 systemd-networkd[1436]: eth0: Gained IPv6LL Jun 20 19:02:33.465778 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Jun 20 19:02:33.468295 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:02:33.469876 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:02:33.477299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:02:33.479924 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:02:33.501304 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:02:34.041168 systemd-networkd[1436]: eth1: Gained IPv6LL Jun 20 19:02:34.041689 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Jun 20 19:02:34.307950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:02:34.313840 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jun 20 19:02:34.315252 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:02:34.316508 systemd[1]: Startup finished in 1.166s (kernel) + 6.491s (initrd) + 4.192s (userspace) = 11.850s. Jun 20 19:02:34.872130 kubelet[1623]: E0620 19:02:34.872062 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:02:34.874357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:02:34.874477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:02:34.874729 systemd[1]: kubelet.service: Consumed 855ms CPU time, 268.6M memory peak. Jun 20 19:02:45.125173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:02:45.130387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:02:45.210245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:02:45.212904 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:02:45.247400 kubelet[1641]: E0620 19:02:45.247338 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:02:45.252275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:02:45.252405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:02:45.252824 systemd[1]: kubelet.service: Consumed 124ms CPU time, 110.9M memory peak. Jun 20 19:02:55.503085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:02:55.508167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:02:55.591881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:02:55.594550 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:02:55.622734 kubelet[1657]: E0620 19:02:55.622684 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:02:55.624808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:02:55.625085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:02:55.625356 systemd[1]: kubelet.service: Consumed 101ms CPU time, 112.5M memory peak. Jun 20 19:03:05.189502 systemd-timesyncd[1421]: Contacted time server 88.198.7.62:123 (2.flatcar.pool.ntp.org). Jun 20 19:03:05.189571 systemd-timesyncd[1421]: Initial clock synchronization to Fri 2025-06-20 19:03:05.189291 UTC. Jun 20 19:03:05.189707 systemd-resolved[1368]: Clock change detected. Flushing caches. 
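The kubelet.service failures that begin above and repeat below follow a single pattern: the unit exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet, and systemd reschedules it roughly every ten seconds, incrementing the restart counter each time. This is the normal state of a kubeadm-style node before "kubeadm init" or "kubeadm join" has written that file. The sketch below is purely illustrative; the placeholder YAML assumes the kubelet.config.k8s.io/v1beta1 schema and is not the configuration this node eventually uses.

```python
# Illustrative sketch of the restart loop logged above: kubelet exits 1 while
# /var/lib/kubelet/config.yaml is missing, and systemd keeps rescheduling it.
# kubeadm normally writes this file during node bootstrap; the placeholder
# below is an assumed minimal v1beta1 document, for illustration only.
import pathlib

CONFIG_PATH = pathlib.Path("/var/lib/kubelet/config.yaml")

MINIMAL_PLACEHOLDER = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

def explain_restart_loop() -> None:
    if CONFIG_PATH.exists():
        print("kubelet config present; the scheduled restarts should succeed.")
    else:
        print(f"{CONFIG_PATH} missing -> kubelet exits 1 and systemd "
              "reschedules it (restart counter keeps climbing).")
        print("kubeadm would normally write something like:\n" + MINIMAL_PLACEHOLDER)

if __name__ == "__main__":
    explain_restart_loop()
```

Once the node is actually joined to a cluster, kubeadm drops the real configuration at that path and the restart loop seen in the remainder of this log ends.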
Jun 20 19:03:06.681495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:03:06.685874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:03:06.769046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:03:06.772496 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:03:06.806460 kubelet[1673]: E0620 19:03:06.806407 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:03:06.808403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:03:06.808544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:03:06.808851 systemd[1]: kubelet.service: Consumed 114ms CPU time, 110.3M memory peak. Jun 20 19:03:16.931462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 19:03:16.936911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:03:17.027158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:03:17.030135 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:03:17.060351 kubelet[1688]: E0620 19:03:17.060296 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:03:17.062377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:03:17.062495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:03:17.062750 systemd[1]: kubelet.service: Consumed 106ms CPU time, 110.5M memory peak. Jun 20 19:03:18.351159 update_engine[1509]: I20250620 19:03:18.350978 1509 update_attempter.cc:509] Updating boot flags... Jun 20 19:03:18.415759 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1705) Jun 20 19:03:18.465738 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1704) Jun 20 19:03:18.512774 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1704) Jun 20 19:03:27.181554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 20 19:03:27.193102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:03:27.276116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:03:27.278903 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:03:27.321326 kubelet[1725]: E0620 19:03:27.321252 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:03:27.323769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:03:27.323896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:03:27.324143 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109.7M memory peak. Jun 20 19:03:37.431417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 20 19:03:37.436874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:03:37.521473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:03:37.524284 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:03:37.553161 kubelet[1741]: E0620 19:03:37.553111 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:03:37.555650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:03:37.555804 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:03:37.556100 systemd[1]: kubelet.service: Consumed 112ms CPU time, 112.2M memory peak. Jun 20 19:03:47.682052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 20 19:03:47.693060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:03:47.800295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:03:47.804995 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:03:47.837084 kubelet[1757]: E0620 19:03:47.837027 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:03:47.839100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:03:47.839231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:03:47.839471 systemd[1]: kubelet.service: Consumed 129ms CPU time, 108.2M memory peak. Jun 20 19:03:57.931369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jun 20 19:03:57.937075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:03:58.030297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:03:58.037987 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:03:58.073502 kubelet[1772]: E0620 19:03:58.073392 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:03:58.074884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:03:58.075076 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:03:58.075413 systemd[1]: kubelet.service: Consumed 121ms CPU time, 110.1M memory peak. Jun 20 19:04:08.181526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jun 20 19:04:08.186937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:08.282480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:08.292014 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:04:08.325184 kubelet[1788]: E0620 19:04:08.325126 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:04:08.326774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:04:08.326984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:04:08.327276 systemd[1]: kubelet.service: Consumed 116ms CPU time, 110.3M memory peak. Jun 20 19:04:18.431417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jun 20 19:04:18.435879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:18.517776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:18.537018 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:04:18.565853 kubelet[1804]: E0620 19:04:18.565804 1804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:04:18.567694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:04:18.567843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:04:18.568073 systemd[1]: kubelet.service: Consumed 108ms CPU time, 110.3M memory peak. Jun 20 19:04:23.963929 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:04:23.969091 systemd[1]: Started sshd@0-157.180.24.181:22-139.178.68.195:34062.service - OpenSSH per-connection server daemon (139.178.68.195:34062). 
Jun 20 19:04:24.968888 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 34062 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:04:24.970864 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:04:24.980039 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:04:24.984921 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:04:24.988415 systemd-logind[1504]: New session 1 of user core. Jun 20 19:04:24.995076 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:04:25.001094 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:04:25.003965 (systemd)[1816]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:04:25.006085 systemd-logind[1504]: New session c1 of user core. Jun 20 19:04:25.146031 systemd[1816]: Queued start job for default target default.target. Jun 20 19:04:25.156483 systemd[1816]: Created slice app.slice - User Application Slice. Jun 20 19:04:25.156506 systemd[1816]: Reached target paths.target - Paths. Jun 20 19:04:25.156540 systemd[1816]: Reached target timers.target - Timers. Jun 20 19:04:25.157549 systemd[1816]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:04:25.166880 systemd[1816]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:04:25.166921 systemd[1816]: Reached target sockets.target - Sockets. Jun 20 19:04:25.166954 systemd[1816]: Reached target basic.target - Basic System. Jun 20 19:04:25.166982 systemd[1816]: Reached target default.target - Main User Target. Jun 20 19:04:25.167001 systemd[1816]: Startup finished in 155ms. Jun 20 19:04:25.167284 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:04:25.169172 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:04:25.853944 systemd[1]: Started sshd@1-157.180.24.181:22-139.178.68.195:34064.service - OpenSSH per-connection server daemon (139.178.68.195:34064). Jun 20 19:04:26.834385 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 34064 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:04:26.835482 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:04:26.838925 systemd-logind[1504]: New session 2 of user core. Jun 20 19:04:26.848910 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:04:27.505180 sshd[1829]: Connection closed by 139.178.68.195 port 34064 Jun 20 19:04:27.505710 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jun 20 19:04:27.508419 systemd[1]: sshd@1-157.180.24.181:22-139.178.68.195:34064.service: Deactivated successfully. Jun 20 19:04:27.509941 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:04:27.511008 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:04:27.512072 systemd-logind[1504]: Removed session 2. Jun 20 19:04:27.675931 systemd[1]: Started sshd@2-157.180.24.181:22-139.178.68.195:34072.service - OpenSSH per-connection server daemon (139.178.68.195:34072). 
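The login above follows the usual systemd-logind flow: sshd accepts the key for user core (UID 500), logind creates user-500.slice and user-runtime-dir@500.service, a per-user manager (user@500.service) brings up its own default.target, and only then does session-1.scope start. A short sketch of how to inspect that state from the host, using standard logind/systemd tooling.

```bash
# Active logind sessions and the users they belong to.
loginctl list-sessions

# Details of the 'core' user record created above (runtime dir, sessions, slice).
loginctl show-user core

# The per-user service manager and the session scope are ordinary units.
systemctl status user@500.service session-1.scope --no-pager
```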
Jun 20 19:04:28.638938 sshd[1835]: Accepted publickey for core from 139.178.68.195 port 34072 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:04:28.640130 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:04:28.641178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jun 20 19:04:28.646874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:28.651737 systemd-logind[1504]: New session 3 of user core. Jun 20 19:04:28.652518 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:04:28.730742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:28.733493 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:04:28.760524 kubelet[1846]: E0620 19:04:28.760483 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:04:28.762126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:04:28.762263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:04:28.762517 systemd[1]: kubelet.service: Consumed 102ms CPU time, 110.2M memory peak. Jun 20 19:04:29.309610 sshd[1840]: Connection closed by 139.178.68.195 port 34072 Jun 20 19:04:29.310270 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Jun 20 19:04:29.312798 systemd[1]: sshd@2-157.180.24.181:22-139.178.68.195:34072.service: Deactivated successfully. Jun 20 19:04:29.314816 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:04:29.315153 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:04:29.316281 systemd-logind[1504]: Removed session 3. Jun 20 19:04:29.479973 systemd[1]: Started sshd@3-157.180.24.181:22-139.178.68.195:34082.service - OpenSSH per-connection server daemon (139.178.68.195:34082). Jun 20 19:04:30.443548 sshd[1858]: Accepted publickey for core from 139.178.68.195 port 34082 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:04:30.444838 sshd-session[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:04:30.449397 systemd-logind[1504]: New session 4 of user core. Jun 20 19:04:30.456855 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:04:31.117925 sshd[1860]: Connection closed by 139.178.68.195 port 34082 Jun 20 19:04:31.118415 sshd-session[1858]: pam_unix(sshd:session): session closed for user core Jun 20 19:04:31.121491 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:04:31.121742 systemd[1]: sshd@3-157.180.24.181:22-139.178.68.195:34082.service: Deactivated successfully. Jun 20 19:04:31.123352 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:04:31.124138 systemd-logind[1504]: Removed session 4. Jun 20 19:04:31.289042 systemd[1]: Started sshd@4-157.180.24.181:22-139.178.68.195:34092.service - OpenSSH per-connection server daemon (139.178.68.195:34092). 
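Note that the kubelet restart loop (counter 11 here) keeps running independently of the interactive SSH sessions; it will only stop failing once a config file appears. Some checks that are useful while diagnosing such a loop, all plain systemd/journal tooling.

```bash
# How many times systemd has restarted the unit so far.
systemctl show kubelet.service -p NRestarts

# Current state plus the last few log lines.
systemctl status kubelet.service --no-pager

# Full output of the most recent failed attempts.
journalctl -u kubelet.service -n 50 --no-pager
```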
Jun 20 19:04:32.257500 sshd[1866]: Accepted publickey for core from 139.178.68.195 port 34092 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:04:32.259094 sshd-session[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:04:32.264419 systemd-logind[1504]: New session 5 of user core. Jun 20 19:04:32.273935 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:04:32.781880 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:04:32.782165 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:04:32.797133 sudo[1869]: pam_unix(sudo:session): session closed for user root Jun 20 19:04:32.954437 sshd[1868]: Connection closed by 139.178.68.195 port 34092 Jun 20 19:04:32.955201 sshd-session[1866]: pam_unix(sshd:session): session closed for user core Jun 20 19:04:32.958768 systemd[1]: sshd@4-157.180.24.181:22-139.178.68.195:34092.service: Deactivated successfully. Jun 20 19:04:32.961066 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:04:32.962506 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:04:32.963630 systemd-logind[1504]: Removed session 5. Jun 20 19:04:33.126332 systemd[1]: Started sshd@5-157.180.24.181:22-139.178.68.195:34096.service - OpenSSH per-connection server daemon (139.178.68.195:34096). Jun 20 19:04:34.094401 sshd[1875]: Accepted publickey for core from 139.178.68.195 port 34096 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:04:34.095689 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:04:34.100665 systemd-logind[1504]: New session 6 of user core. Jun 20 19:04:34.107919 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:04:34.610148 sudo[1879]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:04:34.610427 sudo[1879]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:04:34.613927 sudo[1879]: pam_unix(sudo:session): session closed for user root Jun 20 19:04:34.618339 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:04:34.618576 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:04:34.630984 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:04:34.655215 augenrules[1901]: No rules Jun 20 19:04:34.656292 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:04:34.656711 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:04:34.657942 sudo[1878]: pam_unix(sudo:session): session closed for user root Jun 20 19:04:34.815707 sshd[1877]: Connection closed by 139.178.68.195 port 34096 Jun 20 19:04:34.816234 sshd-session[1875]: pam_unix(sshd:session): session closed for user core Jun 20 19:04:34.818839 systemd[1]: sshd@5-157.180.24.181:22-139.178.68.195:34096.service: Deactivated successfully. Jun 20 19:04:34.820259 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:04:34.821387 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:04:34.822436 systemd-logind[1504]: Removed session 6. Jun 20 19:04:34.986964 systemd[1]: Started sshd@6-157.180.24.181:22-139.178.68.195:55544.service - OpenSSH per-connection server daemon (139.178.68.195:55544). 
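The install flow above removes the shipped SELinux and default audit rule files and then restarts audit-rules.service; augenrules reports "No rules" because /etc/audit/rules.d/ is now empty. A sketch of how that service assembles its rule set, using standard auditd tooling; the paths are the stock ones and are an assumption here.

```bash
# audit-rules.service merges every *.rules file under /etc/audit/rules.d/
# into a single rule set and loads it into the kernel.
ls /etc/audit/rules.d/        # empty after the two files removed above

# Rebuild and load the merged rules, then confirm what the kernel holds.
sudo augenrules --load
sudo auditctl -l              # prints "No rules" when nothing is loaded
```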
Jun 20 19:04:35.954891 sshd[1910]: Accepted publickey for core from 139.178.68.195 port 55544 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:04:35.956048 sshd-session[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:04:35.960225 systemd-logind[1504]: New session 7 of user core. Jun 20 19:04:35.968880 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:04:36.471782 sudo[1913]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:04:36.472033 sudo[1913]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:04:36.718933 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:04:36.719212 (dockerd)[1931]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:04:36.949384 dockerd[1931]: time="2025-06-20T19:04:36.949326574Z" level=info msg="Starting up" Jun 20 19:04:37.030095 dockerd[1931]: time="2025-06-20T19:04:37.030050278Z" level=info msg="Loading containers: start." Jun 20 19:04:37.154745 kernel: Initializing XFRM netlink socket Jun 20 19:04:37.212442 systemd-networkd[1436]: docker0: Link UP Jun 20 19:04:37.232978 dockerd[1931]: time="2025-06-20T19:04:37.232929171Z" level=info msg="Loading containers: done." Jun 20 19:04:37.245072 dockerd[1931]: time="2025-06-20T19:04:37.245026612Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:04:37.245206 dockerd[1931]: time="2025-06-20T19:04:37.245129485Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 19:04:37.245280 dockerd[1931]: time="2025-06-20T19:04:37.245253027Z" level=info msg="Daemon has completed initialization" Jun 20 19:04:37.271992 dockerd[1931]: time="2025-06-20T19:04:37.271890188Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:04:37.272055 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:04:38.263368 containerd[1527]: time="2025-06-20T19:04:38.263309062Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 20 19:04:38.829700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jun 20 19:04:38.835947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:38.844147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284248149.mount: Deactivated successfully. Jun 20 19:04:38.933190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
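At this point the Docker engine is up (overlay2 storage driver, docker0 bridge created, API listening on /run/docker.sock), while containerd separately begins pulling the Kubernetes control-plane images. A couple of quick checks that correspond to those log lines, using standard Docker and iproute2 commands.

```bash
# Engine reachable, reporting version and the storage driver logged above.
docker info --format '{{.ServerVersion}} {{.Driver}}'

# The docker0 bridge that systemd-networkd reported as "Link UP".
ip link show docker0

# Same API, queried directly over the unix socket.
curl --silent --unix-socket /run/docker.sock http://localhost/version
```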
Jun 20 19:04:38.943938 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:04:38.975394 kubelet[2136]: E0620 19:04:38.975216 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:04:38.976516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:04:38.976649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:04:38.976913 systemd[1]: kubelet.service: Consumed 108ms CPU time, 110.1M memory peak. Jun 20 19:04:39.748153 containerd[1527]: time="2025-06-20T19:04:39.748096633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:39.749134 containerd[1527]: time="2025-06-20T19:04:39.749084625Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077838" Jun 20 19:04:39.750056 containerd[1527]: time="2025-06-20T19:04:39.749992177Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:39.753184 containerd[1527]: time="2025-06-20T19:04:39.753109512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:39.754075 containerd[1527]: time="2025-06-20T19:04:39.753907397Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.490546367s" Jun 20 19:04:39.754075 containerd[1527]: time="2025-06-20T19:04:39.753941792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 20 19:04:39.754530 containerd[1527]: time="2025-06-20T19:04:39.754495881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 20 19:04:40.927330 containerd[1527]: time="2025-06-20T19:04:40.927276880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:40.928224 containerd[1527]: time="2025-06-20T19:04:40.928184193Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713316" Jun 20 19:04:40.930053 containerd[1527]: time="2025-06-20T19:04:40.930005187Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:40.932749 containerd[1527]: time="2025-06-20T19:04:40.932705199Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:40.933576 containerd[1527]: time="2025-06-20T19:04:40.933457700Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.178818722s" Jun 20 19:04:40.933576 containerd[1527]: time="2025-06-20T19:04:40.933484210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 20 19:04:40.934321 containerd[1527]: time="2025-06-20T19:04:40.934305780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 20 19:04:41.947501 containerd[1527]: time="2025-06-20T19:04:41.947458725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:41.948459 containerd[1527]: time="2025-06-20T19:04:41.948424549Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783693" Jun 20 19:04:41.949392 containerd[1527]: time="2025-06-20T19:04:41.949359113Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:41.954181 containerd[1527]: time="2025-06-20T19:04:41.953785068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:41.954868 containerd[1527]: time="2025-06-20T19:04:41.954850178Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.020471932s" Jun 20 19:04:41.954937 containerd[1527]: time="2025-06-20T19:04:41.954925399Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 20 19:04:41.955829 containerd[1527]: time="2025-06-20T19:04:41.955805290Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 20 19:04:42.894166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55982790.mount: Deactivated successfully. 
Jun 20 19:04:43.153480 containerd[1527]: time="2025-06-20T19:04:43.153357984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:43.154348 containerd[1527]: time="2025-06-20T19:04:43.154316426Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383971" Jun 20 19:04:43.155221 containerd[1527]: time="2025-06-20T19:04:43.155182012Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:43.156810 containerd[1527]: time="2025-06-20T19:04:43.156786957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:43.157626 containerd[1527]: time="2025-06-20T19:04:43.157472223Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.201643079s" Jun 20 19:04:43.157626 containerd[1527]: time="2025-06-20T19:04:43.157498222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 20 19:04:43.158273 containerd[1527]: time="2025-06-20T19:04:43.157968342Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:04:43.630691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465085988.mount: Deactivated successfully. 
Jun 20 19:04:44.403574 containerd[1527]: time="2025-06-20T19:04:44.403509065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:44.404448 containerd[1527]: time="2025-06-20T19:04:44.404415579Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Jun 20 19:04:44.405249 containerd[1527]: time="2025-06-20T19:04:44.405215331Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:44.407247 containerd[1527]: time="2025-06-20T19:04:44.407210934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:44.408249 containerd[1527]: time="2025-06-20T19:04:44.408127186Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.250137434s" Jun 20 19:04:44.408249 containerd[1527]: time="2025-06-20T19:04:44.408150791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:04:44.408760 containerd[1527]: time="2025-06-20T19:04:44.408684540Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:04:44.847761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892912228.mount: Deactivated successfully. 
Jun 20 19:04:44.853122 containerd[1527]: time="2025-06-20T19:04:44.853066571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:44.853939 containerd[1527]: time="2025-06-20T19:04:44.853879288Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jun 20 19:04:44.854817 containerd[1527]: time="2025-06-20T19:04:44.854773337Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:44.856923 containerd[1527]: time="2025-06-20T19:04:44.856882115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:44.857949 containerd[1527]: time="2025-06-20T19:04:44.857497338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 448.656693ms" Jun 20 19:04:44.857949 containerd[1527]: time="2025-06-20T19:04:44.857535510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:04:44.858134 containerd[1527]: time="2025-06-20T19:04:44.858081342Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 20 19:04:45.386432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406197311.mount: Deactivated successfully. Jun 20 19:04:46.756232 containerd[1527]: time="2025-06-20T19:04:46.756179042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:46.757292 containerd[1527]: time="2025-06-20T19:04:46.757254524Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083" Jun 20 19:04:46.758257 containerd[1527]: time="2025-06-20T19:04:46.758218336Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:46.760647 containerd[1527]: time="2025-06-20T19:04:46.760601359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:04:46.761902 containerd[1527]: time="2025-06-20T19:04:46.761338543Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.903228077s" Jun 20 19:04:46.761902 containerd[1527]: time="2025-06-20T19:04:46.761361055Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 20 19:04:49.143009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
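The pulls above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) go through containerd's CRI plugin rather than Docker, so the images land in containerd's k8s.io namespace. A sketch of how to confirm the set that was just fetched; the socket path is the usual containerd default and is an assumption.

```bash
# Images in the CRI namespace used by the kubelet.
ctr --namespace k8s.io images ls | grep registry.k8s.io

# Equivalent view through the CRI API.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
```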
Jun 20 19:04:49.153013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:49.166194 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:04:49.166302 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:04:49.166955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:49.181905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:49.204054 systemd[1]: Reload requested from client PID 2346 ('systemctl') (unit session-7.scope)... Jun 20 19:04:49.204068 systemd[1]: Reloading... Jun 20 19:04:49.315814 zram_generator::config[2391]: No configuration found. Jun 20 19:04:49.410860 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:04:49.501102 systemd[1]: Reloading finished in 296 ms. Jun 20 19:04:49.544022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:49.547793 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:49.549399 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:04:49.549574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:49.549626 systemd[1]: kubelet.service: Consumed 82ms CPU time, 97.6M memory peak. Jun 20 19:04:49.554053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:49.647763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:49.651801 (kubelet)[2447]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:04:49.690146 kubelet[2447]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:04:49.690146 kubelet[2447]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:04:49.690146 kubelet[2447]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
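After the reload the kubelet finally starts with a real configuration, and the warnings note that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are legacy flags that should move into the file passed via --config. A sketch of where those values come from on a node like this one: the unit drop-ins that assemble the command line, and the config file whose absence caused the earlier crash loop.

```bash
# Unit file and drop-ins that build the kubelet command line, including the
# KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS environment variables referenced
# in the earlier warnings.
systemctl cat kubelet.service

# The configuration file the deprecated flags are expected to migrate into.
cat /var/lib/kubelet/config.yaml
```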
Jun 20 19:04:49.690546 kubelet[2447]: I0620 19:04:49.690170 2447 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:04:50.102468 kubelet[2447]: I0620 19:04:50.102337 2447 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:04:50.102468 kubelet[2447]: I0620 19:04:50.102366 2447 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:04:50.102628 kubelet[2447]: I0620 19:04:50.102566 2447 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:04:50.126140 kubelet[2447]: E0620 19:04:50.125961 2447 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://157.180.24.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:50.128690 kubelet[2447]: I0620 19:04:50.126965 2447 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:04:50.135989 kubelet[2447]: E0620 19:04:50.135940 2447 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:04:50.135989 kubelet[2447]: I0620 19:04:50.135984 2447 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:04:50.142890 kubelet[2447]: I0620 19:04:50.142859 2447 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:04:50.144899 kubelet[2447]: I0620 19:04:50.144860 2447 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:04:50.145046 kubelet[2447]: I0620 19:04:50.145005 2447 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:04:50.145256 kubelet[2447]: I0620 19:04:50.145037 2447 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-e-b360e0c6ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:04:50.145327 kubelet[2447]: I0620 19:04:50.145260 2447 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:04:50.145327 kubelet[2447]: I0620 19:04:50.145273 2447 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:04:50.145429 kubelet[2447]: I0620 19:04:50.145398 2447 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:04:50.148772 kubelet[2447]: I0620 19:04:50.148740 2447 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:04:50.148772 kubelet[2447]: I0620 19:04:50.148779 2447 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:04:50.148866 kubelet[2447]: I0620 19:04:50.148816 2447 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:04:50.148866 kubelet[2447]: I0620 19:04:50.148834 2447 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:04:50.151106 kubelet[2447]: W0620 19:04:50.150965 2447 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.24.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-e-b360e0c6ec&limit=500&resourceVersion=0": dial tcp 157.180.24.181:6443: connect: connection refused Jun 20 19:04:50.151106 kubelet[2447]: E0620 19:04:50.151024 2447 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://157.180.24.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-e-b360e0c6ec&limit=500&resourceVersion=0\": dial tcp 157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:50.152068 kubelet[2447]: W0620 19:04:50.151956 2447 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.24.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.24.181:6443: connect: connection refused Jun 20 19:04:50.152068 kubelet[2447]: E0620 19:04:50.151992 2447 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.24.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:50.152514 kubelet[2447]: I0620 19:04:50.152413 2447 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:04:50.155084 kubelet[2447]: I0620 19:04:50.154996 2447 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:04:50.155908 kubelet[2447]: W0620 19:04:50.155489 2447 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:04:50.157155 kubelet[2447]: I0620 19:04:50.157044 2447 server.go:1274] "Started kubelet" Jun 20 19:04:50.158274 kubelet[2447]: I0620 19:04:50.157830 2447 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:04:50.158664 kubelet[2447]: I0620 19:04:50.158635 2447 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:04:50.162215 kubelet[2447]: I0620 19:04:50.161756 2447 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:04:50.162215 kubelet[2447]: I0620 19:04:50.162011 2447 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:04:50.163397 kubelet[2447]: I0620 19:04:50.163046 2447 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:04:50.163545 kubelet[2447]: E0620 19:04:50.162155 2447 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.24.181:6443/api/v1/namespaces/default/events\": dial tcp 157.180.24.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-e-b360e0c6ec.184ad5a06b35b7c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-e-b360e0c6ec,UID:ci-4230-2-0-e-b360e0c6ec,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-e-b360e0c6ec,},FirstTimestamp:2025-06-20 19:04:50.157025223 +0000 UTC m=+0.502221257,LastTimestamp:2025-06-20 19:04:50.157025223 +0000 UTC m=+0.502221257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-e-b360e0c6ec,}" Jun 20 19:04:50.164461 kubelet[2447]: I0620 19:04:50.164334 2447 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:04:50.168950 kubelet[2447]: I0620 
19:04:50.168938 2447 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:04:50.171039 kubelet[2447]: I0620 19:04:50.171027 2447 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:04:50.171987 kubelet[2447]: I0620 19:04:50.171138 2447 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:04:50.171987 kubelet[2447]: W0620 19:04:50.171451 2447 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.24.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.24.181:6443: connect: connection refused Jun 20 19:04:50.171987 kubelet[2447]: E0620 19:04:50.171486 2447 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.24.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:50.171987 kubelet[2447]: E0620 19:04:50.171875 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:50.171987 kubelet[2447]: E0620 19:04:50.171944 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-e-b360e0c6ec?timeout=10s\": dial tcp 157.180.24.181:6443: connect: connection refused" interval="200ms" Jun 20 19:04:50.174242 kubelet[2447]: I0620 19:04:50.174223 2447 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:04:50.174496 kubelet[2447]: I0620 19:04:50.174381 2447 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:04:50.177765 kubelet[2447]: I0620 19:04:50.177317 2447 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:04:50.188349 kubelet[2447]: I0620 19:04:50.188309 2447 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:04:50.189329 kubelet[2447]: I0620 19:04:50.189285 2447 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:04:50.189329 kubelet[2447]: I0620 19:04:50.189315 2447 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:04:50.189329 kubelet[2447]: I0620 19:04:50.189334 2447 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:04:50.189499 kubelet[2447]: E0620 19:04:50.189367 2447 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:04:50.194698 kubelet[2447]: W0620 19:04:50.194510 2447 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.24.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.24.181:6443: connect: connection refused Jun 20 19:04:50.194698 kubelet[2447]: E0620 19:04:50.194594 2447 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.24.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:50.200678 kubelet[2447]: I0620 19:04:50.200642 2447 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:04:50.200678 kubelet[2447]: I0620 19:04:50.200662 2447 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:04:50.200799 kubelet[2447]: I0620 19:04:50.200686 2447 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:04:50.202257 kubelet[2447]: I0620 19:04:50.202232 2447 policy_none.go:49] "None policy: Start" Jun 20 19:04:50.202918 kubelet[2447]: I0620 19:04:50.202895 2447 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:04:50.202965 kubelet[2447]: I0620 19:04:50.202936 2447 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:04:50.208136 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:04:50.218370 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:04:50.221306 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:04:50.229330 kubelet[2447]: I0620 19:04:50.229306 2447 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:04:50.229662 kubelet[2447]: I0620 19:04:50.229641 2447 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:04:50.229700 kubelet[2447]: I0620 19:04:50.229652 2447 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:04:50.229939 kubelet[2447]: I0620 19:04:50.229916 2447 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:04:50.232387 kubelet[2447]: E0620 19:04:50.232311 2447 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:50.302772 systemd[1]: Created slice kubepods-burstable-podb44ce924c631e7cfbb5db97ff4ee288a.slice - libcontainer container kubepods-burstable-podb44ce924c631e7cfbb5db97ff4ee288a.slice. Jun 20 19:04:50.325946 systemd[1]: Created slice kubepods-burstable-pod924566e082f8e01f31232e0955590eef.slice - libcontainer container kubepods-burstable-pod924566e082f8e01f31232e0955590eef.slice. 
Jun 20 19:04:50.332072 kubelet[2447]: I0620 19:04:50.331804 2447 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.332279 kubelet[2447]: E0620 19:04:50.332257 2447 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.24.181:6443/api/v1/nodes\": dial tcp 157.180.24.181:6443: connect: connection refused" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.337195 systemd[1]: Created slice kubepods-burstable-pod53bd1a1f90e4e424d1bc201f1c55253d.slice - libcontainer container kubepods-burstable-pod53bd1a1f90e4e424d1bc201f1c55253d.slice. Jun 20 19:04:50.373200 kubelet[2447]: E0620 19:04:50.373071 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-e-b360e0c6ec?timeout=10s\": dial tcp 157.180.24.181:6443: connect: connection refused" interval="400ms" Jun 20 19:04:50.471766 kubelet[2447]: I0620 19:04:50.471685 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.471766 kubelet[2447]: I0620 19:04:50.471752 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b44ce924c631e7cfbb5db97ff4ee288a-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-e-b360e0c6ec\" (UID: \"b44ce924c631e7cfbb5db97ff4ee288a\") " pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.471766 kubelet[2447]: I0620 19:04:50.471772 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b44ce924c631e7cfbb5db97ff4ee288a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-e-b360e0c6ec\" (UID: \"b44ce924c631e7cfbb5db97ff4ee288a\") " pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.471961 kubelet[2447]: I0620 19:04:50.471788 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.471961 kubelet[2447]: I0620 19:04:50.471803 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.471961 kubelet[2447]: I0620 19:04:50.471818 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " 
pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.471961 kubelet[2447]: I0620 19:04:50.471833 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.471961 kubelet[2447]: I0620 19:04:50.471848 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53bd1a1f90e4e424d1bc201f1c55253d-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-e-b360e0c6ec\" (UID: \"53bd1a1f90e4e424d1bc201f1c55253d\") " pod="kube-system/kube-scheduler-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.472078 kubelet[2447]: I0620 19:04:50.471862 2447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b44ce924c631e7cfbb5db97ff4ee288a-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-e-b360e0c6ec\" (UID: \"b44ce924c631e7cfbb5db97ff4ee288a\") " pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.535050 kubelet[2447]: I0620 19:04:50.535012 2447 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.535433 kubelet[2447]: E0620 19:04:50.535373 2447 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.24.181:6443/api/v1/nodes\": dial tcp 157.180.24.181:6443: connect: connection refused" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.625014 containerd[1527]: time="2025-06-20T19:04:50.624869993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-e-b360e0c6ec,Uid:b44ce924c631e7cfbb5db97ff4ee288a,Namespace:kube-system,Attempt:0,}" Jun 20 19:04:50.634711 containerd[1527]: time="2025-06-20T19:04:50.634637396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-e-b360e0c6ec,Uid:924566e082f8e01f31232e0955590eef,Namespace:kube-system,Attempt:0,}" Jun 20 19:04:50.640364 containerd[1527]: time="2025-06-20T19:04:50.640333622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-e-b360e0c6ec,Uid:53bd1a1f90e4e424d1bc201f1c55253d,Namespace:kube-system,Attempt:0,}" Jun 20 19:04:50.773554 kubelet[2447]: E0620 19:04:50.773472 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-e-b360e0c6ec?timeout=10s\": dial tcp 157.180.24.181:6443: connect: connection refused" interval="800ms" Jun 20 19:04:50.937886 kubelet[2447]: I0620 19:04:50.937840 2447 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:50.938154 kubelet[2447]: E0620 19:04:50.938123 2447 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.24.181:6443/api/v1/nodes\": dial tcp 157.180.24.181:6443: connect: connection refused" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:51.073830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383726908.mount: Deactivated successfully. 
Jun 20 19:04:51.078682 containerd[1527]: time="2025-06-20T19:04:51.078636646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:04:51.080701 containerd[1527]: time="2025-06-20T19:04:51.080593491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jun 20 19:04:51.081457 containerd[1527]: time="2025-06-20T19:04:51.081397939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:04:51.082648 containerd[1527]: time="2025-06-20T19:04:51.082597424Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:04:51.083412 containerd[1527]: time="2025-06-20T19:04:51.083345055Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:04:51.084208 containerd[1527]: time="2025-06-20T19:04:51.084148462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:04:51.084790 containerd[1527]: time="2025-06-20T19:04:51.084756721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:04:51.086421 containerd[1527]: time="2025-06-20T19:04:51.086381799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:04:51.087538 containerd[1527]: time="2025-06-20T19:04:51.087306895Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 462.333668ms" Jun 20 19:04:51.089626 containerd[1527]: time="2025-06-20T19:04:51.089589996Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 449.189025ms" Jun 20 19:04:51.093451 containerd[1527]: time="2025-06-20T19:04:51.093416059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.709993ms" Jun 20 19:04:51.101012 kubelet[2447]: W0620 19:04:51.100950 2447 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.24.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.24.181:6443: connect: connection refused Jun 20 19:04:51.101012 
kubelet[2447]: E0620 19:04:51.100991 2447 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.24.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:51.184766 containerd[1527]: time="2025-06-20T19:04:51.183675563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:04:51.184766 containerd[1527]: time="2025-06-20T19:04:51.183875481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:04:51.184766 containerd[1527]: time="2025-06-20T19:04:51.183890789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:04:51.184766 containerd[1527]: time="2025-06-20T19:04:51.184016737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:04:51.186208 containerd[1527]: time="2025-06-20T19:04:51.183272502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:04:51.186208 containerd[1527]: time="2025-06-20T19:04:51.186085132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:04:51.186208 containerd[1527]: time="2025-06-20T19:04:51.186096283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:04:51.186208 containerd[1527]: time="2025-06-20T19:04:51.186148902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:04:51.190304 containerd[1527]: time="2025-06-20T19:04:51.189630205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:04:51.190304 containerd[1527]: time="2025-06-20T19:04:51.189678707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:04:51.190304 containerd[1527]: time="2025-06-20T19:04:51.189693094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:04:51.190304 containerd[1527]: time="2025-06-20T19:04:51.189760671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:04:51.204849 systemd[1]: Started cri-containerd-5b6c6c47dc462d6da5977909332041037c8c2ed11c1cc618171753fcb629bd0e.scope - libcontainer container 5b6c6c47dc462d6da5977909332041037c8c2ed11c1cc618171753fcb629bd0e. Jun 20 19:04:51.208799 systemd[1]: Started cri-containerd-0754e4d011bc7d45afdb77a61b95f8192afdd4ac95943acbc1842aa647ade7f4.scope - libcontainer container 0754e4d011bc7d45afdb77a61b95f8192afdd4ac95943acbc1842aa647ade7f4. Jun 20 19:04:51.212814 systemd[1]: Started cri-containerd-8406b3d0e3dd99edccfa2e37152700be27224e17343b09f32b6517d66767c929.scope - libcontainer container 8406b3d0e3dd99edccfa2e37152700be27224e17343b09f32b6517d66767c929. 
Jun 20 19:04:51.245509 containerd[1527]: time="2025-06-20T19:04:51.245405815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-e-b360e0c6ec,Uid:53bd1a1f90e4e424d1bc201f1c55253d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b6c6c47dc462d6da5977909332041037c8c2ed11c1cc618171753fcb629bd0e\"" Jun 20 19:04:51.248848 containerd[1527]: time="2025-06-20T19:04:51.248793612Z" level=info msg="CreateContainer within sandbox \"5b6c6c47dc462d6da5977909332041037c8c2ed11c1cc618171753fcb629bd0e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:04:51.264072 containerd[1527]: time="2025-06-20T19:04:51.263978109Z" level=info msg="CreateContainer within sandbox \"5b6c6c47dc462d6da5977909332041037c8c2ed11c1cc618171753fcb629bd0e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ebdc801950a7c803ccff5428052acfba48e11ef20d6c77fbaaa356f74792fb7\"" Jun 20 19:04:51.266207 containerd[1527]: time="2025-06-20T19:04:51.265849732Z" level=info msg="StartContainer for \"5ebdc801950a7c803ccff5428052acfba48e11ef20d6c77fbaaa356f74792fb7\"" Jun 20 19:04:51.273662 containerd[1527]: time="2025-06-20T19:04:51.273630893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-e-b360e0c6ec,Uid:b44ce924c631e7cfbb5db97ff4ee288a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8406b3d0e3dd99edccfa2e37152700be27224e17343b09f32b6517d66767c929\"" Jun 20 19:04:51.274702 containerd[1527]: time="2025-06-20T19:04:51.274678861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-e-b360e0c6ec,Uid:924566e082f8e01f31232e0955590eef,Namespace:kube-system,Attempt:0,} returns sandbox id \"0754e4d011bc7d45afdb77a61b95f8192afdd4ac95943acbc1842aa647ade7f4\"" Jun 20 19:04:51.278888 containerd[1527]: time="2025-06-20T19:04:51.278870285Z" level=info msg="CreateContainer within sandbox \"0754e4d011bc7d45afdb77a61b95f8192afdd4ac95943acbc1842aa647ade7f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:04:51.280541 containerd[1527]: time="2025-06-20T19:04:51.280519067Z" level=info msg="CreateContainer within sandbox \"8406b3d0e3dd99edccfa2e37152700be27224e17343b09f32b6517d66767c929\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:04:51.296851 systemd[1]: Started cri-containerd-5ebdc801950a7c803ccff5428052acfba48e11ef20d6c77fbaaa356f74792fb7.scope - libcontainer container 5ebdc801950a7c803ccff5428052acfba48e11ef20d6c77fbaaa356f74792fb7. 
Jun 20 19:04:51.299670 containerd[1527]: time="2025-06-20T19:04:51.299602506Z" level=info msg="CreateContainer within sandbox \"8406b3d0e3dd99edccfa2e37152700be27224e17343b09f32b6517d66767c929\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1afc26b5734b2c8153995b773d8a9a358c2d3cd26ee6683e4700bb6c961d5b22\"" Jun 20 19:04:51.302959 containerd[1527]: time="2025-06-20T19:04:51.301532118Z" level=info msg="CreateContainer within sandbox \"0754e4d011bc7d45afdb77a61b95f8192afdd4ac95943acbc1842aa647ade7f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a11f25d4530498330f8549c0ec7d2c06f890cad087ddf003c33ecc6507e4161\"" Jun 20 19:04:51.302959 containerd[1527]: time="2025-06-20T19:04:51.302227762Z" level=info msg="StartContainer for \"1afc26b5734b2c8153995b773d8a9a358c2d3cd26ee6683e4700bb6c961d5b22\"" Jun 20 19:04:51.308003 containerd[1527]: time="2025-06-20T19:04:51.307977246Z" level=info msg="StartContainer for \"0a11f25d4530498330f8549c0ec7d2c06f890cad087ddf003c33ecc6507e4161\"" Jun 20 19:04:51.339918 containerd[1527]: time="2025-06-20T19:04:51.339865960Z" level=info msg="StartContainer for \"5ebdc801950a7c803ccff5428052acfba48e11ef20d6c77fbaaa356f74792fb7\" returns successfully" Jun 20 19:04:51.340053 systemd[1]: Started cri-containerd-1afc26b5734b2c8153995b773d8a9a358c2d3cd26ee6683e4700bb6c961d5b22.scope - libcontainer container 1afc26b5734b2c8153995b773d8a9a358c2d3cd26ee6683e4700bb6c961d5b22. Jun 20 19:04:51.351984 systemd[1]: Started cri-containerd-0a11f25d4530498330f8549c0ec7d2c06f890cad087ddf003c33ecc6507e4161.scope - libcontainer container 0a11f25d4530498330f8549c0ec7d2c06f890cad087ddf003c33ecc6507e4161. Jun 20 19:04:51.399561 containerd[1527]: time="2025-06-20T19:04:51.399525203Z" level=info msg="StartContainer for \"1afc26b5734b2c8153995b773d8a9a358c2d3cd26ee6683e4700bb6c961d5b22\" returns successfully" Jun 20 19:04:51.399682 containerd[1527]: time="2025-06-20T19:04:51.399586538Z" level=info msg="StartContainer for \"0a11f25d4530498330f8549c0ec7d2c06f890cad087ddf003c33ecc6507e4161\" returns successfully" Jun 20 19:04:51.429646 kubelet[2447]: W0620 19:04:51.429575 2447 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.24.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-e-b360e0c6ec&limit=500&resourceVersion=0": dial tcp 157.180.24.181:6443: connect: connection refused Jun 20 19:04:51.429822 kubelet[2447]: E0620 19:04:51.429655 2447 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.24.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-e-b360e0c6ec&limit=500&resourceVersion=0\": dial tcp 157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:51.529857 kubelet[2447]: W0620 19:04:51.529708 2447 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.24.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.24.181:6443: connect: connection refused Jun 20 19:04:51.529857 kubelet[2447]: E0620 19:04:51.529799 2447 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.24.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
157.180.24.181:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:04:51.574755 kubelet[2447]: E0620 19:04:51.574689 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-e-b360e0c6ec?timeout=10s\": dial tcp 157.180.24.181:6443: connect: connection refused" interval="1.6s" Jun 20 19:04:51.740522 kubelet[2447]: I0620 19:04:51.740491 2447 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:52.811743 kubelet[2447]: I0620 19:04:52.810342 2447 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:52.811743 kubelet[2447]: E0620 19:04:52.810381 2447 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230-2-0-e-b360e0c6ec\": node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:52.824950 kubelet[2447]: E0620 19:04:52.824914 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:52.925657 kubelet[2447]: E0620 19:04:52.925570 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:53.026361 kubelet[2447]: E0620 19:04:53.026308 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:53.127139 kubelet[2447]: E0620 19:04:53.127021 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:53.227565 kubelet[2447]: E0620 19:04:53.227519 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:53.328319 kubelet[2447]: E0620 19:04:53.328264 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:53.429101 kubelet[2447]: E0620 19:04:53.428942 2447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-0-e-b360e0c6ec\" not found" Jun 20 19:04:54.154951 kubelet[2447]: I0620 19:04:54.154899 2447 apiserver.go:52] "Watching apiserver" Jun 20 19:04:54.172061 kubelet[2447]: I0620 19:04:54.171966 2447 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:04:54.607604 systemd[1]: Reload requested from client PID 2722 ('systemctl') (unit session-7.scope)... Jun 20 19:04:54.607641 systemd[1]: Reloading... Jun 20 19:04:54.697741 zram_generator::config[2764]: No configuration found. Jun 20 19:04:54.782654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:04:54.876521 systemd[1]: Reloading finished in 268 ms. Jun 20 19:04:54.895923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:04:54.906363 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:04:54.906539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:54.906582 systemd[1]: kubelet.service: Consumed 808ms CPU time, 126.1M memory peak. Jun 20 19:04:54.911948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
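"Failed to ensure lease exists, will retry" above is the kubelet's node-lease heartbeat being refused along with everything else while the API server is unreachable; the retry interval reported is 1.6s. A small client-go sketch, assuming a placeholder admin kubeconfig path, that reads the same Lease object once the control plane is reachable:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, not taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The kubelet renews this Lease to signal node health; the log shows the GET
        // being retried while the API server was refusing connections.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").
            Get(context.TODO(), "ci-4230-2-0-e-b360e0c6ec", metav1.GetOptions{})
        if err != nil {
            log.Fatalf("get lease: %v", err)
        }
        if lease.Spec.HolderIdentity != nil && lease.Spec.RenewTime != nil {
            fmt.Printf("holder=%s renewed=%v\n", *lease.Spec.HolderIdentity, lease.Spec.RenewTime.Time)
        }
    }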
Jun 20 19:04:54.997202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:04:55.001069 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:04:55.030979 kubelet[2818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:04:55.031997 kubelet[2818]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:04:55.031997 kubelet[2818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:04:55.031997 kubelet[2818]: I0620 19:04:55.031088 2818 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:04:55.036970 kubelet[2818]: I0620 19:04:55.036945 2818 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:04:55.037089 kubelet[2818]: I0620 19:04:55.037076 2818 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:04:55.037351 kubelet[2818]: I0620 19:04:55.037333 2818 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:04:55.039145 kubelet[2818]: I0620 19:04:55.039131 2818 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:04:55.046883 kubelet[2818]: I0620 19:04:55.046867 2818 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:04:55.052812 kubelet[2818]: E0620 19:04:55.052761 2818 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:04:55.052812 kubelet[2818]: I0620 19:04:55.052800 2818 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:04:55.054764 kubelet[2818]: I0620 19:04:55.054745 2818 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:04:55.054842 kubelet[2818]: I0620 19:04:55.054824 2818 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:04:55.054935 kubelet[2818]: I0620 19:04:55.054911 2818 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:04:55.055061 kubelet[2818]: I0620 19:04:55.054929 2818 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-e-b360e0c6ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:04:55.055061 kubelet[2818]: I0620 19:04:55.055054 2818 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:04:55.055061 kubelet[2818]: I0620 19:04:55.055060 2818 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:04:55.055208 kubelet[2818]: I0620 19:04:55.055080 2818 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:04:55.055208 kubelet[2818]: I0620 19:04:55.055148 2818 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:04:55.055208 kubelet[2818]: I0620 19:04:55.055156 2818 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:04:55.055362 kubelet[2818]: I0620 19:04:55.055332 2818 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:04:55.055362 kubelet[2818]: I0620 19:04:55.055348 2818 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:04:55.056212 kubelet[2818]: I0620 19:04:55.056163 2818 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:04:55.056748 kubelet[2818]: I0620 19:04:55.056562 2818 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:04:55.060883 kubelet[2818]: I0620 19:04:55.060864 2818 server.go:1274] "Started kubelet" Jun 20 19:04:55.062229 kubelet[2818]: I0620 19:04:55.062192 2818 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:04:55.064266 
kubelet[2818]: I0620 19:04:55.062195 2818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:04:55.064614 kubelet[2818]: I0620 19:04:55.064420 2818 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:04:55.065410 kubelet[2818]: I0620 19:04:55.065394 2818 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:04:55.066547 kubelet[2818]: I0620 19:04:55.066532 2818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:04:55.067735 kubelet[2818]: I0620 19:04:55.067589 2818 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:04:55.074484 kubelet[2818]: I0620 19:04:55.074335 2818 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:04:55.074484 kubelet[2818]: I0620 19:04:55.074409 2818 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:04:55.075298 kubelet[2818]: I0620 19:04:55.075094 2818 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:04:55.075298 kubelet[2818]: I0620 19:04:55.075154 2818 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:04:55.076222 kubelet[2818]: I0620 19:04:55.076212 2818 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:04:55.077896 kubelet[2818]: E0620 19:04:55.077872 2818 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:04:55.078071 kubelet[2818]: I0620 19:04:55.078005 2818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:04:55.079013 kubelet[2818]: I0620 19:04:55.078942 2818 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:04:55.079272 kubelet[2818]: I0620 19:04:55.079204 2818 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:04:55.079272 kubelet[2818]: I0620 19:04:55.079220 2818 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:04:55.079272 kubelet[2818]: I0620 19:04:55.079240 2818 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:04:55.079466 kubelet[2818]: E0620 19:04:55.079378 2818 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:04:55.115734 kubelet[2818]: I0620 19:04:55.114133 2818 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:04:55.115734 kubelet[2818]: I0620 19:04:55.114150 2818 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:04:55.115734 kubelet[2818]: I0620 19:04:55.114164 2818 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:04:55.115734 kubelet[2818]: I0620 19:04:55.114295 2818 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:04:55.115734 kubelet[2818]: I0620 19:04:55.114304 2818 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:04:55.115734 kubelet[2818]: I0620 19:04:55.114319 2818 policy_none.go:49] "None policy: Start" Jun 20 19:04:55.116864 kubelet[2818]: I0620 19:04:55.116849 2818 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:04:55.116926 kubelet[2818]: I0620 19:04:55.116869 2818 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:04:55.117004 kubelet[2818]: I0620 19:04:55.116992 2818 state_mem.go:75] "Updated machine memory state" Jun 20 19:04:55.126233 kubelet[2818]: I0620 19:04:55.126216 2818 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:04:55.126351 kubelet[2818]: I0620 19:04:55.126335 2818 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:04:55.126396 kubelet[2818]: I0620 19:04:55.126349 2818 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:04:55.126535 kubelet[2818]: I0620 19:04:55.126521 2818 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:04:55.235406 kubelet[2818]: I0620 19:04:55.235372 2818 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.242875 kubelet[2818]: I0620 19:04:55.242845 2818 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.243174 kubelet[2818]: I0620 19:04:55.243153 2818 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277363 kubelet[2818]: I0620 19:04:55.277328 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277363 kubelet[2818]: I0620 19:04:55.277358 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 
19:04:55.277475 kubelet[2818]: I0620 19:04:55.277377 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b44ce924c631e7cfbb5db97ff4ee288a-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-e-b360e0c6ec\" (UID: \"b44ce924c631e7cfbb5db97ff4ee288a\") " pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277475 kubelet[2818]: I0620 19:04:55.277393 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277475 kubelet[2818]: I0620 19:04:55.277411 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277475 kubelet[2818]: I0620 19:04:55.277424 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/924566e082f8e01f31232e0955590eef-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-e-b360e0c6ec\" (UID: \"924566e082f8e01f31232e0955590eef\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277475 kubelet[2818]: I0620 19:04:55.277438 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53bd1a1f90e4e424d1bc201f1c55253d-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-e-b360e0c6ec\" (UID: \"53bd1a1f90e4e424d1bc201f1c55253d\") " pod="kube-system/kube-scheduler-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277587 kubelet[2818]: I0620 19:04:55.277451 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b44ce924c631e7cfbb5db97ff4ee288a-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-e-b360e0c6ec\" (UID: \"b44ce924c631e7cfbb5db97ff4ee288a\") " pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.277587 kubelet[2818]: I0620 19:04:55.277467 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b44ce924c631e7cfbb5db97ff4ee288a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-e-b360e0c6ec\" (UID: \"b44ce924c631e7cfbb5db97ff4ee288a\") " pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:55.621971 sudo[2850]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:04:55.622486 sudo[2850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:04:56.061287 kubelet[2818]: I0620 19:04:56.061238 2818 apiserver.go:52] "Watching apiserver" Jun 20 19:04:56.074743 kubelet[2818]: I0620 19:04:56.074542 2818 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:04:56.109640 kubelet[2818]: E0620 19:04:56.109556 2818 kubelet.go:1915] 
"Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-2-0-e-b360e0c6ec\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" Jun 20 19:04:56.122209 sudo[2850]: pam_unix(sudo:session): session closed for user root Jun 20 19:04:56.133700 kubelet[2818]: I0620 19:04:56.133641 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-0-e-b360e0c6ec" podStartSLOduration=1.1334064719999999 podStartE2EDuration="1.133406472s" podCreationTimestamp="2025-06-20 19:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:04:56.125883362 +0000 UTC m=+1.121022377" watchObservedRunningTime="2025-06-20 19:04:56.133406472 +0000 UTC m=+1.128545487" Jun 20 19:04:56.134248 kubelet[2818]: I0620 19:04:56.134118 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-0-e-b360e0c6ec" podStartSLOduration=1.134108878 podStartE2EDuration="1.134108878s" podCreationTimestamp="2025-06-20 19:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:04:56.133222475 +0000 UTC m=+1.128361491" watchObservedRunningTime="2025-06-20 19:04:56.134108878 +0000 UTC m=+1.129247893" Jun 20 19:04:57.473857 sudo[1913]: pam_unix(sudo:session): session closed for user root Jun 20 19:04:57.631123 sshd[1912]: Connection closed by 139.178.68.195 port 55544 Jun 20 19:04:57.632936 sshd-session[1910]: pam_unix(sshd:session): session closed for user core Jun 20 19:04:57.637621 systemd[1]: sshd@6-157.180.24.181:22-139.178.68.195:55544.service: Deactivated successfully. Jun 20 19:04:57.641630 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:04:57.641969 systemd[1]: session-7.scope: Consumed 3.765s CPU time, 210.5M memory peak. Jun 20 19:04:57.644610 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:04:57.646797 systemd-logind[1504]: Removed session 7. Jun 20 19:04:59.485735 kubelet[2818]: I0620 19:04:59.485610 2818 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:04:59.486551 containerd[1527]: time="2025-06-20T19:04:59.485995281Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:04:59.486903 kubelet[2818]: I0620 19:04:59.486854 2818 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:05:00.329219 kubelet[2818]: I0620 19:05:00.327784 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-0-e-b360e0c6ec" podStartSLOduration=5.327766363 podStartE2EDuration="5.327766363s" podCreationTimestamp="2025-06-20 19:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:04:56.14102317 +0000 UTC m=+1.136162185" watchObservedRunningTime="2025-06-20 19:05:00.327766363 +0000 UTC m=+5.322905388" Jun 20 19:05:00.342305 systemd[1]: Created slice kubepods-besteffort-pod71d45dd8_09d5_44e6_be48_74f3010a8718.slice - libcontainer container kubepods-besteffort-pod71d45dd8_09d5_44e6_be48_74f3010a8718.slice. 
Jun 20 19:05:00.357506 systemd[1]: Created slice kubepods-burstable-poddd412fae_ddb2_4651_be6d_e666b34abd34.slice - libcontainer container kubepods-burstable-poddd412fae_ddb2_4651_be6d_e666b34abd34.slice. Jun 20 19:05:00.410044 kubelet[2818]: I0620 19:05:00.410015 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71d45dd8-09d5-44e6-be48-74f3010a8718-xtables-lock\") pod \"kube-proxy-54z2b\" (UID: \"71d45dd8-09d5-44e6-be48-74f3010a8718\") " pod="kube-system/kube-proxy-54z2b" Jun 20 19:05:00.410444 kubelet[2818]: I0620 19:05:00.410304 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-run\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.410444 kubelet[2818]: I0620 19:05:00.410326 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-hostproc\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.410618 kubelet[2818]: I0620 19:05:00.410342 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71d45dd8-09d5-44e6-be48-74f3010a8718-kube-proxy\") pod \"kube-proxy-54z2b\" (UID: \"71d45dd8-09d5-44e6-be48-74f3010a8718\") " pod="kube-system/kube-proxy-54z2b" Jun 20 19:05:00.410618 kubelet[2818]: I0620 19:05:00.410572 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71d45dd8-09d5-44e6-be48-74f3010a8718-lib-modules\") pod \"kube-proxy-54z2b\" (UID: \"71d45dd8-09d5-44e6-be48-74f3010a8718\") " pod="kube-system/kube-proxy-54z2b" Jun 20 19:05:00.410874 kubelet[2818]: I0620 19:05:00.410594 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd412fae-ddb2-4651-be6d-e666b34abd34-clustermesh-secrets\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.410874 kubelet[2818]: I0620 19:05:00.410821 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-config-path\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411049 kubelet[2818]: I0620 19:05:00.410844 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdfz\" (UniqueName: \"kubernetes.io/projected/71d45dd8-09d5-44e6-be48-74f3010a8718-kube-api-access-vvdfz\") pod \"kube-proxy-54z2b\" (UID: \"71d45dd8-09d5-44e6-be48-74f3010a8718\") " pod="kube-system/kube-proxy-54z2b" Jun 20 19:05:00.411049 kubelet[2818]: I0620 19:05:00.410994 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-cgroup\") pod \"cilium-s2ngr\" (UID: 
\"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411218 kubelet[2818]: I0620 19:05:00.411133 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-etc-cni-netd\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411218 kubelet[2818]: I0620 19:05:00.411154 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-kernel\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411445 kubelet[2818]: I0620 19:05:00.411166 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-hubble-tls\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411445 kubelet[2818]: I0620 19:05:00.411398 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvd9\" (UniqueName: \"kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-kube-api-access-7hvd9\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411445 kubelet[2818]: I0620 19:05:00.411411 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-bpf-maps\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411445 kubelet[2818]: I0620 19:05:00.411422 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-net\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411723 kubelet[2818]: I0620 19:05:00.411566 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cni-path\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411723 kubelet[2818]: I0620 19:05:00.411585 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-lib-modules\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.411723 kubelet[2818]: I0620 19:05:00.411596 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-xtables-lock\") pod \"cilium-s2ngr\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " pod="kube-system/cilium-s2ngr" Jun 20 19:05:00.430834 systemd[1]: Created slice kubepods-besteffort-pod53e9b112_cb3a_4942_b001_e0ca1da04070.slice - 
libcontainer container kubepods-besteffort-pod53e9b112_cb3a_4942_b001_e0ca1da04070.slice. Jun 20 19:05:00.512802 kubelet[2818]: I0620 19:05:00.512752 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53e9b112-cb3a-4942-b001-e0ca1da04070-cilium-config-path\") pod \"cilium-operator-5d85765b45-kwz5m\" (UID: \"53e9b112-cb3a-4942-b001-e0ca1da04070\") " pod="kube-system/cilium-operator-5d85765b45-kwz5m" Jun 20 19:05:00.514024 kubelet[2818]: I0620 19:05:00.512891 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz48\" (UniqueName: \"kubernetes.io/projected/53e9b112-cb3a-4942-b001-e0ca1da04070-kube-api-access-ldz48\") pod \"cilium-operator-5d85765b45-kwz5m\" (UID: \"53e9b112-cb3a-4942-b001-e0ca1da04070\") " pod="kube-system/cilium-operator-5d85765b45-kwz5m" Jun 20 19:05:00.654996 containerd[1527]: time="2025-06-20T19:05:00.654815203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54z2b,Uid:71d45dd8-09d5-44e6-be48-74f3010a8718,Namespace:kube-system,Attempt:0,}" Jun 20 19:05:00.663360 containerd[1527]: time="2025-06-20T19:05:00.663096024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2ngr,Uid:dd412fae-ddb2-4651-be6d-e666b34abd34,Namespace:kube-system,Attempt:0,}" Jun 20 19:05:00.677904 containerd[1527]: time="2025-06-20T19:05:00.677814884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:05:00.677904 containerd[1527]: time="2025-06-20T19:05:00.677868876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:05:00.677904 containerd[1527]: time="2025-06-20T19:05:00.677884866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:00.682132 containerd[1527]: time="2025-06-20T19:05:00.678584805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:00.690659 containerd[1527]: time="2025-06-20T19:05:00.690397125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:05:00.690659 containerd[1527]: time="2025-06-20T19:05:00.690436569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:05:00.690659 containerd[1527]: time="2025-06-20T19:05:00.690445726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:00.690659 containerd[1527]: time="2025-06-20T19:05:00.690496272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:00.700941 systemd[1]: Started cri-containerd-0dc5eeb3ba894e16e5d599f1ae9c6138d6c28b2bbea2d2e79c35a4c5d55df806.scope - libcontainer container 0dc5eeb3ba894e16e5d599f1ae9c6138d6c28b2bbea2d2e79c35a4c5d55df806. Jun 20 19:05:00.716832 systemd[1]: Started cri-containerd-6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248.scope - libcontainer container 6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248. 
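The reconciler entries above list every volume the kubelet attaches (hostPath mounts such as xtables-lock and bpf-maps, the cilium ConfigMap and Secret, and the projected kube-api-access tokens) before the kube-proxy and cilium sandboxes start. A hedged client-go sketch that prints the same volume names from the pod specs, assuming a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, not taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Prints the volumes the kubelet's reconciler attaches before each sandbox starts
        // (xtables-lock, cilium-run, bpf-maps, clustermesh-secrets, kube-api-access-*, ...).
        for _, p := range pods.Items {
            fmt.Println(p.Name)
            for _, v := range p.Spec.Volumes {
                fmt.Println("  volume:", v.Name)
            }
        }
    }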
Jun 20 19:05:00.734931 containerd[1527]: time="2025-06-20T19:05:00.734696150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54z2b,Uid:71d45dd8-09d5-44e6-be48-74f3010a8718,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dc5eeb3ba894e16e5d599f1ae9c6138d6c28b2bbea2d2e79c35a4c5d55df806\"" Jun 20 19:05:00.739775 containerd[1527]: time="2025-06-20T19:05:00.739521949Z" level=info msg="CreateContainer within sandbox \"0dc5eeb3ba894e16e5d599f1ae9c6138d6c28b2bbea2d2e79c35a4c5d55df806\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:05:00.741113 containerd[1527]: time="2025-06-20T19:05:00.740865011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kwz5m,Uid:53e9b112-cb3a-4942-b001-e0ca1da04070,Namespace:kube-system,Attempt:0,}" Jun 20 19:05:00.756823 containerd[1527]: time="2025-06-20T19:05:00.756777741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2ngr,Uid:dd412fae-ddb2-4651-be6d-e666b34abd34,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\"" Jun 20 19:05:00.758805 containerd[1527]: time="2025-06-20T19:05:00.758788241Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:05:00.760146 containerd[1527]: time="2025-06-20T19:05:00.760072152Z" level=info msg="CreateContainer within sandbox \"0dc5eeb3ba894e16e5d599f1ae9c6138d6c28b2bbea2d2e79c35a4c5d55df806\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65f9e639b1fc7ee8e857bd725ad311994592572d7be5fc57fa76286244a9af69\"" Jun 20 19:05:00.760514 containerd[1527]: time="2025-06-20T19:05:00.760385983Z" level=info msg="StartContainer for \"65f9e639b1fc7ee8e857bd725ad311994592572d7be5fc57fa76286244a9af69\"" Jun 20 19:05:00.779189 containerd[1527]: time="2025-06-20T19:05:00.778944642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:05:00.779189 containerd[1527]: time="2025-06-20T19:05:00.778992181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:05:00.779189 containerd[1527]: time="2025-06-20T19:05:00.779004354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:00.779189 containerd[1527]: time="2025-06-20T19:05:00.779066151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:00.789848 systemd[1]: Started cri-containerd-65f9e639b1fc7ee8e857bd725ad311994592572d7be5fc57fa76286244a9af69.scope - libcontainer container 65f9e639b1fc7ee8e857bd725ad311994592572d7be5fc57fa76286244a9af69. Jun 20 19:05:00.793852 systemd[1]: Started cri-containerd-78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df.scope - libcontainer container 78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df. 
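The PullImage and later "Pulled image ... in 5.3s" entries show containerd fetching the digest-pinned cilium image on the kubelet's behalf. The sketch below performs an equivalent pull directly with the containerd Go client; the default socket path and the use of the k8s.io namespace are assumptions based on a stock CRI setup, not values printed in this log.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to containerd on its default socket (assumed path).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Same reference the kubelet asked containerd to pull above.
        ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, _ := img.Size(ctx)
        fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }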
Jun 20 19:05:00.823708 containerd[1527]: time="2025-06-20T19:05:00.823673739Z" level=info msg="StartContainer for \"65f9e639b1fc7ee8e857bd725ad311994592572d7be5fc57fa76286244a9af69\" returns successfully" Jun 20 19:05:00.836249 containerd[1527]: time="2025-06-20T19:05:00.836219803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kwz5m,Uid:53e9b112-cb3a-4942-b001-e0ca1da04070,Namespace:kube-system,Attempt:0,} returns sandbox id \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\"" Jun 20 19:05:02.193290 kubelet[2818]: I0620 19:05:02.193223 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-54z2b" podStartSLOduration=2.193202337 podStartE2EDuration="2.193202337s" podCreationTimestamp="2025-06-20 19:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:05:01.128105799 +0000 UTC m=+6.123244834" watchObservedRunningTime="2025-06-20 19:05:02.193202337 +0000 UTC m=+7.188341363" Jun 20 19:05:04.765026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235396254.mount: Deactivated successfully. Jun 20 19:05:06.097973 containerd[1527]: time="2025-06-20T19:05:06.090225302Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:05:06.099610 containerd[1527]: time="2025-06-20T19:05:06.098534856Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.339652207s" Jun 20 19:05:06.099610 containerd[1527]: time="2025-06-20T19:05:06.098562137Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:05:06.100635 containerd[1527]: time="2025-06-20T19:05:06.099829063Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:05:06.116749 containerd[1527]: time="2025-06-20T19:05:06.116357952Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:05:06.117369 containerd[1527]: time="2025-06-20T19:05:06.117351344Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:05:06.118072 containerd[1527]: time="2025-06-20T19:05:06.118056513Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:05:06.173823 containerd[1527]: time="2025-06-20T19:05:06.173775912Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\"" Jun 20 19:05:06.174858 containerd[1527]: time="2025-06-20T19:05:06.174248703Z" level=info msg="StartContainer for \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\"" Jun 20 19:05:06.245822 systemd[1]: Started cri-containerd-7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71.scope - libcontainer container 7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71. Jun 20 19:05:06.262879 containerd[1527]: time="2025-06-20T19:05:06.261977034Z" level=info msg="StartContainer for \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\" returns successfully" Jun 20 19:05:06.270414 systemd[1]: cri-containerd-7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71.scope: Deactivated successfully. Jun 20 19:05:06.360689 containerd[1527]: time="2025-06-20T19:05:06.348669684Z" level=info msg="shim disconnected" id=7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71 namespace=k8s.io Jun 20 19:05:06.360689 containerd[1527]: time="2025-06-20T19:05:06.360605841Z" level=warning msg="cleaning up after shim disconnected" id=7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71 namespace=k8s.io Jun 20 19:05:06.360689 containerd[1527]: time="2025-06-20T19:05:06.360619717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:05:07.136074 containerd[1527]: time="2025-06-20T19:05:07.135993613Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:05:07.151163 containerd[1527]: time="2025-06-20T19:05:07.151019277Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\"" Jun 20 19:05:07.152166 containerd[1527]: time="2025-06-20T19:05:07.151616382Z" level=info msg="StartContainer for \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\"" Jun 20 19:05:07.165619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71-rootfs.mount: Deactivated successfully. Jun 20 19:05:07.205884 systemd[1]: Started cri-containerd-d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49.scope - libcontainer container d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49. Jun 20 19:05:07.231367 containerd[1527]: time="2025-06-20T19:05:07.231293368Z" level=info msg="StartContainer for \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\" returns successfully" Jun 20 19:05:07.245863 systemd[1]: cri-containerd-d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49.scope: Deactivated successfully. Jun 20 19:05:07.246303 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:05:07.246416 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:05:07.246974 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:05:07.254011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:05:07.256323 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jun 20 19:05:07.266902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49-rootfs.mount: Deactivated successfully. Jun 20 19:05:07.274365 containerd[1527]: time="2025-06-20T19:05:07.274234366Z" level=info msg="shim disconnected" id=d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49 namespace=k8s.io Jun 20 19:05:07.274365 containerd[1527]: time="2025-06-20T19:05:07.274293207Z" level=warning msg="cleaning up after shim disconnected" id=d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49 namespace=k8s.io Jun 20 19:05:07.274365 containerd[1527]: time="2025-06-20T19:05:07.274301052Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:05:07.276530 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:05:07.930098 containerd[1527]: time="2025-06-20T19:05:07.930048066Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:05:07.931183 containerd[1527]: time="2025-06-20T19:05:07.931125225Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:05:07.932261 containerd[1527]: time="2025-06-20T19:05:07.932205571Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:05:07.933340 containerd[1527]: time="2025-06-20T19:05:07.933121245Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.833267534s" Jun 20 19:05:07.933340 containerd[1527]: time="2025-06-20T19:05:07.933147113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:05:07.934911 containerd[1527]: time="2025-06-20T19:05:07.934789017Z" level=info msg="CreateContainer within sandbox \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:05:07.960093 containerd[1527]: time="2025-06-20T19:05:07.960015765Z" level=info msg="CreateContainer within sandbox \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\"" Jun 20 19:05:07.960888 containerd[1527]: time="2025-06-20T19:05:07.960863392Z" level=info msg="StartContainer for \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\"" Jun 20 19:05:07.987865 systemd[1]: Started cri-containerd-8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a.scope - libcontainer container 8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a. 
Jun 20 19:05:08.008648 containerd[1527]: time="2025-06-20T19:05:08.008618216Z" level=info msg="StartContainer for \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\" returns successfully" Jun 20 19:05:08.145413 containerd[1527]: time="2025-06-20T19:05:08.145094277Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:05:08.203621 containerd[1527]: time="2025-06-20T19:05:08.203567792Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\"" Jun 20 19:05:08.206477 containerd[1527]: time="2025-06-20T19:05:08.206449589Z" level=info msg="StartContainer for \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\"" Jun 20 19:05:08.248591 systemd[1]: Started cri-containerd-b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb.scope - libcontainer container b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb. Jun 20 19:05:08.254212 kubelet[2818]: I0620 19:05:08.252241 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-kwz5m" podStartSLOduration=1.155516961 podStartE2EDuration="8.252218455s" podCreationTimestamp="2025-06-20 19:05:00 +0000 UTC" firstStartedPulling="2025-06-20 19:05:00.83714167 +0000 UTC m=+5.832280686" lastFinishedPulling="2025-06-20 19:05:07.933843164 +0000 UTC m=+12.928982180" observedRunningTime="2025-06-20 19:05:08.251544235 +0000 UTC m=+13.246683251" watchObservedRunningTime="2025-06-20 19:05:08.252218455 +0000 UTC m=+13.247357470" Jun 20 19:05:08.296954 containerd[1527]: time="2025-06-20T19:05:08.296446398Z" level=info msg="StartContainer for \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\" returns successfully" Jun 20 19:05:08.307460 systemd[1]: cri-containerd-b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb.scope: Deactivated successfully. Jun 20 19:05:08.308116 systemd[1]: cri-containerd-b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb.scope: Consumed 16ms CPU time, 3.6M memory peak, 1M read from disk. Jun 20 19:05:08.334096 containerd[1527]: time="2025-06-20T19:05:08.333919969Z" level=info msg="shim disconnected" id=b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb namespace=k8s.io Jun 20 19:05:08.334096 containerd[1527]: time="2025-06-20T19:05:08.334088756Z" level=warning msg="cleaning up after shim disconnected" id=b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb namespace=k8s.io Jun 20 19:05:08.334096 containerd[1527]: time="2025-06-20T19:05:08.334097974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:05:09.148662 containerd[1527]: time="2025-06-20T19:05:09.148524720Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:05:09.160419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb-rootfs.mount: Deactivated successfully. 
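The podStartSLOduration / podStartE2EDuration figures above come from the kubelet's pod startup latency tracker: the E2E value is observed running time minus pod creation time, and the SLO value additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling), which is why 8.25s minus roughly 7.1s of pulling yields about 1.16s here. A rough client-go sketch, with an assumed kubeconfig path, that derives a comparable end-to-end number from the API objects:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, not taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").
            Get(context.TODO(), "cilium-operator-5d85765b45-kwz5m", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Rough end-to-end figure: container running time minus pod creation time,
        // observed after the fact rather than at the moment the pod first reports Running.
        for _, s := range pod.Status.ContainerStatuses {
            if s.State.Running != nil {
                fmt.Printf("%s started %v after pod creation\n",
                    s.Name, s.State.Running.StartedAt.Sub(pod.CreationTimestamp.Time))
            }
        }
    }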
Jun 20 19:05:09.167400 containerd[1527]: time="2025-06-20T19:05:09.167252792Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\"" Jun 20 19:05:09.169559 containerd[1527]: time="2025-06-20T19:05:09.168154049Z" level=info msg="StartContainer for \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\"" Jun 20 19:05:09.198836 systemd[1]: Started cri-containerd-4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636.scope - libcontainer container 4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636. Jun 20 19:05:09.215525 systemd[1]: cri-containerd-4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636.scope: Deactivated successfully. Jun 20 19:05:09.218179 containerd[1527]: time="2025-06-20T19:05:09.218145047Z" level=info msg="StartContainer for \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\" returns successfully" Jun 20 19:05:09.232524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636-rootfs.mount: Deactivated successfully. Jun 20 19:05:09.236997 containerd[1527]: time="2025-06-20T19:05:09.236941295Z" level=info msg="shim disconnected" id=4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636 namespace=k8s.io Jun 20 19:05:09.237156 containerd[1527]: time="2025-06-20T19:05:09.237137715Z" level=warning msg="cleaning up after shim disconnected" id=4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636 namespace=k8s.io Jun 20 19:05:09.237156 containerd[1527]: time="2025-06-20T19:05:09.237154006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:05:10.153389 containerd[1527]: time="2025-06-20T19:05:10.153324818Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:05:10.169983 containerd[1527]: time="2025-06-20T19:05:10.168992703Z" level=info msg="CreateContainer within sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\"" Jun 20 19:05:10.170130 containerd[1527]: time="2025-06-20T19:05:10.170088637Z" level=info msg="StartContainer for \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\"" Jun 20 19:05:10.201444 systemd[1]: run-containerd-runc-k8s.io-791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2-runc.sA9q88.mount: Deactivated successfully. Jun 20 19:05:10.212914 systemd[1]: Started cri-containerd-791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2.scope - libcontainer container 791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2. 
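The mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state containers above are cilium's init containers: each is started, exits, and has its shim torn down before the cilium-agent container is created. A small client-go sketch (assumed kubeconfig path) that checks their terminal states for this pod:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, not taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-s2ngr", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // All init containers must terminate with exit code 0 before cilium-agent runs.
        for _, s := range pod.Status.InitContainerStatuses {
            if t := s.State.Terminated; t != nil {
                fmt.Printf("init %s: exit=%d\n", s.Name, t.ExitCode)
            } else {
                fmt.Printf("init %s: still running or waiting\n", s.Name)
            }
        }
    }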
Jun 20 19:05:10.242996 containerd[1527]: time="2025-06-20T19:05:10.242946235Z" level=info msg="StartContainer for \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\" returns successfully" Jun 20 19:05:10.411424 kubelet[2818]: I0620 19:05:10.410633 2818 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 19:05:10.451365 systemd[1]: Created slice kubepods-burstable-pod8a9dcfa1_5487_4fe2_ba37_6af92d696a6f.slice - libcontainer container kubepods-burstable-pod8a9dcfa1_5487_4fe2_ba37_6af92d696a6f.slice. Jun 20 19:05:10.456148 systemd[1]: Created slice kubepods-burstable-podb5712643_f39c_47bb_9084_f5e4a6fe7c14.slice - libcontainer container kubepods-burstable-podb5712643_f39c_47bb_9084_f5e4a6fe7c14.slice. Jun 20 19:05:10.486115 kubelet[2818]: I0620 19:05:10.486061 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a9dcfa1-5487-4fe2-ba37-6af92d696a6f-config-volume\") pod \"coredns-7c65d6cfc9-6zn4h\" (UID: \"8a9dcfa1-5487-4fe2-ba37-6af92d696a6f\") " pod="kube-system/coredns-7c65d6cfc9-6zn4h" Jun 20 19:05:10.486115 kubelet[2818]: I0620 19:05:10.486103 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5712643-f39c-47bb-9084-f5e4a6fe7c14-config-volume\") pod \"coredns-7c65d6cfc9-q8k6j\" (UID: \"b5712643-f39c-47bb-9084-f5e4a6fe7c14\") " pod="kube-system/coredns-7c65d6cfc9-q8k6j" Jun 20 19:05:10.486115 kubelet[2818]: I0620 19:05:10.486120 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkl4r\" (UniqueName: \"kubernetes.io/projected/8a9dcfa1-5487-4fe2-ba37-6af92d696a6f-kube-api-access-dkl4r\") pod \"coredns-7c65d6cfc9-6zn4h\" (UID: \"8a9dcfa1-5487-4fe2-ba37-6af92d696a6f\") " pod="kube-system/coredns-7c65d6cfc9-6zn4h" Jun 20 19:05:10.486332 kubelet[2818]: I0620 19:05:10.486135 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctjc\" (UniqueName: \"kubernetes.io/projected/b5712643-f39c-47bb-9084-f5e4a6fe7c14-kube-api-access-mctjc\") pod \"coredns-7c65d6cfc9-q8k6j\" (UID: \"b5712643-f39c-47bb-9084-f5e4a6fe7c14\") " pod="kube-system/coredns-7c65d6cfc9-q8k6j" Jun 20 19:05:10.762713 containerd[1527]: time="2025-06-20T19:05:10.762356041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8k6j,Uid:b5712643-f39c-47bb-9084-f5e4a6fe7c14,Namespace:kube-system,Attempt:0,}" Jun 20 19:05:10.763137 containerd[1527]: time="2025-06-20T19:05:10.763116913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6zn4h,Uid:8a9dcfa1-5487-4fe2-ba37-6af92d696a6f,Namespace:kube-system,Attempt:0,}" Jun 20 19:05:11.186280 kubelet[2818]: I0620 19:05:11.186231 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s2ngr" podStartSLOduration=5.844580778 podStartE2EDuration="11.18621538s" podCreationTimestamp="2025-06-20 19:05:00 +0000 UTC" firstStartedPulling="2025-06-20 19:05:00.757595513 +0000 UTC m=+5.752734528" lastFinishedPulling="2025-06-20 19:05:06.099230115 +0000 UTC m=+11.094369130" observedRunningTime="2025-06-20 19:05:11.183159005 +0000 UTC m=+16.178298040" watchObservedRunningTime="2025-06-20 19:05:11.18621538 +0000 UTC m=+16.181354396" Jun 20 19:05:12.420053 systemd-networkd[1436]: cilium_host: Link UP Jun 20 19:05:12.420192 
systemd-networkd[1436]: cilium_net: Link UP Jun 20 19:05:12.420195 systemd-networkd[1436]: cilium_net: Gained carrier Jun 20 19:05:12.420343 systemd-networkd[1436]: cilium_host: Gained carrier Jun 20 19:05:12.420499 systemd-networkd[1436]: cilium_host: Gained IPv6LL Jun 20 19:05:12.495966 systemd-networkd[1436]: cilium_net: Gained IPv6LL Jun 20 19:05:12.521373 systemd-networkd[1436]: cilium_vxlan: Link UP Jun 20 19:05:12.521384 systemd-networkd[1436]: cilium_vxlan: Gained carrier Jun 20 19:05:12.857762 kernel: NET: Registered PF_ALG protocol family Jun 20 19:05:13.419092 systemd-networkd[1436]: lxc_health: Link UP Jun 20 19:05:13.427864 systemd-networkd[1436]: lxc_health: Gained carrier Jun 20 19:05:13.825313 systemd-networkd[1436]: lxc3dba3d6520d7: Link UP Jun 20 19:05:13.830888 kernel: eth0: renamed from tmp88600 Jun 20 19:05:13.842821 kernel: eth0: renamed from tmpec722 Jun 20 19:05:13.850276 systemd-networkd[1436]: lxcb5cc5c133784: Link UP Jun 20 19:05:13.850748 systemd-networkd[1436]: lxcb5cc5c133784: Gained carrier Jun 20 19:05:13.855180 systemd-networkd[1436]: lxc3dba3d6520d7: Gained carrier Jun 20 19:05:14.142880 systemd-networkd[1436]: cilium_vxlan: Gained IPv6LL Jun 20 19:05:15.166926 systemd-networkd[1436]: lxc3dba3d6520d7: Gained IPv6LL Jun 20 19:05:15.486884 systemd-networkd[1436]: lxc_health: Gained IPv6LL Jun 20 19:05:15.550878 systemd-networkd[1436]: lxcb5cc5c133784: Gained IPv6LL Jun 20 19:05:16.954038 containerd[1527]: time="2025-06-20T19:05:16.953691064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:05:16.954038 containerd[1527]: time="2025-06-20T19:05:16.953807353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:05:16.954634 containerd[1527]: time="2025-06-20T19:05:16.953884508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:16.955214 containerd[1527]: time="2025-06-20T19:05:16.955176700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:16.980396 systemd[1]: run-containerd-runc-k8s.io-886009484e9b42bf5a8d08f1633bd30db1144459656a9f626352aa6f3dc67d6b-runc.oYi9Kq.mount: Deactivated successfully. Jun 20 19:05:16.991116 containerd[1527]: time="2025-06-20T19:05:16.990844561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:05:16.991116 containerd[1527]: time="2025-06-20T19:05:16.991027666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:05:16.991116 containerd[1527]: time="2025-06-20T19:05:16.991089171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:16.993861 systemd[1]: Started cri-containerd-886009484e9b42bf5a8d08f1633bd30db1144459656a9f626352aa6f3dc67d6b.scope - libcontainer container 886009484e9b42bf5a8d08f1633bd30db1144459656a9f626352aa6f3dc67d6b. Jun 20 19:05:16.995200 containerd[1527]: time="2025-06-20T19:05:16.994097092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:17.025848 systemd[1]: Started cri-containerd-ec72282fcc93fc51152efeaf4aec9f0bc797cbf03e44e67265e70ce31e3702d5.scope - libcontainer container ec72282fcc93fc51152efeaf4aec9f0bc797cbf03e44e67265e70ce31e3702d5. Jun 20 19:05:17.067136 containerd[1527]: time="2025-06-20T19:05:17.066918139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6zn4h,Uid:8a9dcfa1-5487-4fe2-ba37-6af92d696a6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"886009484e9b42bf5a8d08f1633bd30db1144459656a9f626352aa6f3dc67d6b\"" Jun 20 19:05:17.073564 containerd[1527]: time="2025-06-20T19:05:17.072901819Z" level=info msg="CreateContainer within sandbox \"886009484e9b42bf5a8d08f1633bd30db1144459656a9f626352aa6f3dc67d6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:05:17.092905 containerd[1527]: time="2025-06-20T19:05:17.092836936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8k6j,Uid:b5712643-f39c-47bb-9084-f5e4a6fe7c14,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec72282fcc93fc51152efeaf4aec9f0bc797cbf03e44e67265e70ce31e3702d5\"" Jun 20 19:05:17.095523 containerd[1527]: time="2025-06-20T19:05:17.095407554Z" level=info msg="CreateContainer within sandbox \"886009484e9b42bf5a8d08f1633bd30db1144459656a9f626352aa6f3dc67d6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f29ee324069189458b8ed042942b4239c5dd6d76a648f1cc753012c92a667f2\"" Jun 20 19:05:17.096619 containerd[1527]: time="2025-06-20T19:05:17.095547578Z" level=info msg="CreateContainer within sandbox \"ec72282fcc93fc51152efeaf4aec9f0bc797cbf03e44e67265e70ce31e3702d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:05:17.096619 containerd[1527]: time="2025-06-20T19:05:17.096040475Z" level=info msg="StartContainer for \"4f29ee324069189458b8ed042942b4239c5dd6d76a648f1cc753012c92a667f2\"" Jun 20 19:05:17.107691 containerd[1527]: time="2025-06-20T19:05:17.107641368Z" level=info msg="CreateContainer within sandbox \"ec72282fcc93fc51152efeaf4aec9f0bc797cbf03e44e67265e70ce31e3702d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1538262ec4d4669ac96cff900f89176aa938181f62f437a30c851606609a6b00\"" Jun 20 19:05:17.108832 containerd[1527]: time="2025-06-20T19:05:17.108802693Z" level=info msg="StartContainer for \"1538262ec4d4669ac96cff900f89176aa938181f62f437a30c851606609a6b00\"" Jun 20 19:05:17.121918 systemd[1]: Started cri-containerd-4f29ee324069189458b8ed042942b4239c5dd6d76a648f1cc753012c92a667f2.scope - libcontainer container 4f29ee324069189458b8ed042942b4239c5dd6d76a648f1cc753012c92a667f2. Jun 20 19:05:17.135838 systemd[1]: Started cri-containerd-1538262ec4d4669ac96cff900f89176aa938181f62f437a30c851606609a6b00.scope - libcontainer container 1538262ec4d4669ac96cff900f89176aa938181f62f437a30c851606609a6b00. 
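The systemd-networkd "Link UP" / "Gained carrier" / "Gained IPv6LL" entries above correspond to the cilium_host, cilium_net, cilium_vxlan, lxc_health and per-pod lxc* devices that Cilium creates on the node. The following small Go sketch lists those links and their operational state via netlink; it assumes the github.com/vishvananda/netlink package, and the name-prefix filter is only illustrative.

```go
// Sketch: enumerate cilium_* / lxc* links and their oper state via netlink.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		attrs := l.Attrs()
		if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
			// OperState roughly tracks the "Gained carrier" / "Link DOWN"
			// transitions that systemd-networkd logs for these devices.
			fmt.Printf("%-20s type=%-8s state=%s\n", attrs.Name, l.Type(), attrs.OperState)
		}
	}
}
```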
Jun 20 19:05:17.152442 containerd[1527]: time="2025-06-20T19:05:17.152404530Z" level=info msg="StartContainer for \"4f29ee324069189458b8ed042942b4239c5dd6d76a648f1cc753012c92a667f2\" returns successfully" Jun 20 19:05:17.162781 containerd[1527]: time="2025-06-20T19:05:17.162689194Z" level=info msg="StartContainer for \"1538262ec4d4669ac96cff900f89176aa938181f62f437a30c851606609a6b00\" returns successfully" Jun 20 19:05:17.181738 kubelet[2818]: I0620 19:05:17.181669 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6zn4h" podStartSLOduration=17.181641943 podStartE2EDuration="17.181641943s" podCreationTimestamp="2025-06-20 19:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:05:17.18013636 +0000 UTC m=+22.175275385" watchObservedRunningTime="2025-06-20 19:05:17.181641943 +0000 UTC m=+22.176780958" Jun 20 19:05:17.192830 kubelet[2818]: I0620 19:05:17.192381 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q8k6j" podStartSLOduration=17.192364592 podStartE2EDuration="17.192364592s" podCreationTimestamp="2025-06-20 19:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:05:17.191995908 +0000 UTC m=+22.187134923" watchObservedRunningTime="2025-06-20 19:05:17.192364592 +0000 UTC m=+22.187503608" Jun 20 19:05:17.962055 systemd[1]: run-containerd-runc-k8s.io-ec72282fcc93fc51152efeaf4aec9f0bc797cbf03e44e67265e70ce31e3702d5-runc.WzExRD.mount: Deactivated successfully. Jun 20 19:07:26.686360 systemd[1]: Started sshd@7-157.180.24.181:22-106.112.131.56:52900.service - OpenSSH per-connection server daemon (106.112.131.56:52900). Jun 20 19:09:26.719997 systemd[1]: sshd@7-157.180.24.181:22-106.112.131.56:52900.service: Deactivated successfully. Jun 20 19:09:28.202165 systemd[1]: Started sshd@8-157.180.24.181:22-106.112.131.56:52340.service - OpenSSH per-connection server daemon (106.112.131.56:52340). Jun 20 19:09:34.560948 systemd[1]: Started sshd@9-157.180.24.181:22-139.178.68.195:43484.service - OpenSSH per-connection server daemon (139.178.68.195:43484). Jun 20 19:09:35.531831 sshd[4242]: Accepted publickey for core from 139.178.68.195 port 43484 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:09:35.533748 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:35.540649 systemd-logind[1504]: New session 8 of user core. Jun 20 19:09:35.544846 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:09:36.638366 sshd[4244]: Connection closed by 139.178.68.195 port 43484 Jun 20 19:09:36.639051 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Jun 20 19:09:36.643529 systemd[1]: sshd@9-157.180.24.181:22-139.178.68.195:43484.service: Deactivated successfully. Jun 20 19:09:36.645874 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:09:36.647289 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:09:36.648681 systemd-logind[1504]: Removed session 8. Jun 20 19:09:41.817298 systemd[1]: Started sshd@10-157.180.24.181:22-139.178.68.195:43486.service - OpenSSH per-connection server daemon (139.178.68.195:43486). 
Jun 20 19:09:42.793972 sshd[4259]: Accepted publickey for core from 139.178.68.195 port 43486 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:09:42.795410 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:42.799557 systemd-logind[1504]: New session 9 of user core. Jun 20 19:09:42.804848 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:09:43.542491 sshd[4261]: Connection closed by 139.178.68.195 port 43486 Jun 20 19:09:43.543631 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Jun 20 19:09:43.548348 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:09:43.549238 systemd[1]: sshd@10-157.180.24.181:22-139.178.68.195:43486.service: Deactivated successfully. Jun 20 19:09:43.551515 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:09:43.552643 systemd-logind[1504]: Removed session 9. Jun 20 19:09:48.714430 systemd[1]: Started sshd@11-157.180.24.181:22-139.178.68.195:53866.service - OpenSSH per-connection server daemon (139.178.68.195:53866). Jun 20 19:09:49.679313 sshd[4274]: Accepted publickey for core from 139.178.68.195 port 53866 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:09:49.680777 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:49.685256 systemd-logind[1504]: New session 10 of user core. Jun 20 19:09:49.687882 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:09:50.408279 sshd[4276]: Connection closed by 139.178.68.195 port 53866 Jun 20 19:09:50.408902 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Jun 20 19:09:50.412997 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:09:50.413556 systemd[1]: sshd@11-157.180.24.181:22-139.178.68.195:53866.service: Deactivated successfully. Jun 20 19:09:50.415684 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:09:50.417651 systemd-logind[1504]: Removed session 10. Jun 20 19:09:50.583006 systemd[1]: Started sshd@12-157.180.24.181:22-139.178.68.195:53876.service - OpenSSH per-connection server daemon (139.178.68.195:53876). Jun 20 19:09:51.555583 sshd[4288]: Accepted publickey for core from 139.178.68.195 port 53876 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:09:51.556992 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:51.561138 systemd-logind[1504]: New session 11 of user core. Jun 20 19:09:51.569862 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:09:52.344835 sshd[4290]: Connection closed by 139.178.68.195 port 53876 Jun 20 19:09:52.345659 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Jun 20 19:09:52.350569 systemd[1]: sshd@12-157.180.24.181:22-139.178.68.195:53876.service: Deactivated successfully. Jun 20 19:09:52.353258 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:09:52.354831 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:09:52.356811 systemd-logind[1504]: Removed session 11. Jun 20 19:09:52.519958 systemd[1]: Started sshd@13-157.180.24.181:22-139.178.68.195:53880.service - OpenSSH per-connection server daemon (139.178.68.195:53880). 
Jun 20 19:09:53.494373 sshd[4301]: Accepted publickey for core from 139.178.68.195 port 53880 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:09:53.496223 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:53.502847 systemd-logind[1504]: New session 12 of user core. Jun 20 19:09:53.508957 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:09:54.223954 sshd[4303]: Connection closed by 139.178.68.195 port 53880 Jun 20 19:09:54.224559 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Jun 20 19:09:54.227919 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:09:54.228500 systemd[1]: sshd@13-157.180.24.181:22-139.178.68.195:53880.service: Deactivated successfully. Jun 20 19:09:54.230265 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:09:54.231540 systemd-logind[1504]: Removed session 12. Jun 20 19:09:59.396101 systemd[1]: Started sshd@14-157.180.24.181:22-139.178.68.195:44508.service - OpenSSH per-connection server daemon (139.178.68.195:44508). Jun 20 19:10:00.363910 sshd[4317]: Accepted publickey for core from 139.178.68.195 port 44508 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:00.365319 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:00.371363 systemd-logind[1504]: New session 13 of user core. Jun 20 19:10:00.375858 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:10:01.098926 sshd[4319]: Connection closed by 139.178.68.195 port 44508 Jun 20 19:10:01.099467 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:01.102745 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:10:01.102896 systemd[1]: sshd@14-157.180.24.181:22-139.178.68.195:44508.service: Deactivated successfully. Jun 20 19:10:01.104348 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:10:01.105447 systemd-logind[1504]: Removed session 13. Jun 20 19:10:01.270379 systemd[1]: Started sshd@15-157.180.24.181:22-139.178.68.195:44520.service - OpenSSH per-connection server daemon (139.178.68.195:44520). Jun 20 19:10:02.263007 sshd[4333]: Accepted publickey for core from 139.178.68.195 port 44520 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:02.264499 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:02.269631 systemd-logind[1504]: New session 14 of user core. Jun 20 19:10:02.281949 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:10:03.187419 sshd[4335]: Connection closed by 139.178.68.195 port 44520 Jun 20 19:10:03.188410 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:03.191486 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:10:03.192383 systemd[1]: sshd@15-157.180.24.181:22-139.178.68.195:44520.service: Deactivated successfully. Jun 20 19:10:03.194198 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:10:03.195562 systemd-logind[1504]: Removed session 14. Jun 20 19:10:03.359816 systemd[1]: Started sshd@16-157.180.24.181:22-139.178.68.195:44526.service - OpenSSH per-connection server daemon (139.178.68.195:44526). 
Jun 20 19:10:04.333348 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 44526 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:04.334661 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:04.339198 systemd-logind[1504]: New session 15 of user core. Jun 20 19:10:04.345889 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:10:06.624221 sshd[4347]: Connection closed by 139.178.68.195 port 44526 Jun 20 19:10:06.624986 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:06.629924 systemd[1]: sshd@16-157.180.24.181:22-139.178.68.195:44526.service: Deactivated successfully. Jun 20 19:10:06.630424 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:10:06.632694 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:10:06.634853 systemd-logind[1504]: Removed session 15. Jun 20 19:10:06.795947 systemd[1]: Started sshd@17-157.180.24.181:22-139.178.68.195:58116.service - OpenSSH per-connection server daemon (139.178.68.195:58116). Jun 20 19:10:07.770486 sshd[4364]: Accepted publickey for core from 139.178.68.195 port 58116 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:07.771952 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:07.776973 systemd-logind[1504]: New session 16 of user core. Jun 20 19:10:07.784882 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:10:08.611781 sshd[4366]: Connection closed by 139.178.68.195 port 58116 Jun 20 19:10:08.612445 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:08.616053 systemd[1]: sshd@17-157.180.24.181:22-139.178.68.195:58116.service: Deactivated successfully. Jun 20 19:10:08.618199 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:10:08.619660 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:10:08.621034 systemd-logind[1504]: Removed session 16. Jun 20 19:10:08.785008 systemd[1]: Started sshd@18-157.180.24.181:22-139.178.68.195:58124.service - OpenSSH per-connection server daemon (139.178.68.195:58124). Jun 20 19:10:09.757520 sshd[4377]: Accepted publickey for core from 139.178.68.195 port 58124 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:09.759104 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:09.764273 systemd-logind[1504]: New session 17 of user core. Jun 20 19:10:09.769917 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:10:10.543544 sshd[4379]: Connection closed by 139.178.68.195 port 58124 Jun 20 19:10:10.544826 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:10.549598 systemd[1]: sshd@18-157.180.24.181:22-139.178.68.195:58124.service: Deactivated successfully. Jun 20 19:10:10.553534 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:10:10.555567 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:10:10.558813 systemd-logind[1504]: Removed session 17. Jun 20 19:10:15.717113 systemd[1]: Started sshd@19-157.180.24.181:22-139.178.68.195:37380.service - OpenSSH per-connection server daemon (139.178.68.195:37380). 
Jun 20 19:10:16.687868 sshd[4395]: Accepted publickey for core from 139.178.68.195 port 37380 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:16.689134 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:16.694580 systemd-logind[1504]: New session 18 of user core. Jun 20 19:10:16.696951 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:10:17.433133 sshd[4397]: Connection closed by 139.178.68.195 port 37380 Jun 20 19:10:17.433753 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:17.437693 systemd[1]: sshd@19-157.180.24.181:22-139.178.68.195:37380.service: Deactivated successfully. Jun 20 19:10:17.440443 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:10:17.443492 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:10:17.444921 systemd-logind[1504]: Removed session 18. Jun 20 19:10:22.605955 systemd[1]: Started sshd@20-157.180.24.181:22-139.178.68.195:37390.service - OpenSSH per-connection server daemon (139.178.68.195:37390). Jun 20 19:10:23.572232 sshd[4411]: Accepted publickey for core from 139.178.68.195 port 37390 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:23.573478 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:23.577675 systemd-logind[1504]: New session 19 of user core. Jun 20 19:10:23.580845 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:10:24.307537 sshd[4413]: Connection closed by 139.178.68.195 port 37390 Jun 20 19:10:24.308849 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:24.313982 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:10:24.314539 systemd[1]: sshd@20-157.180.24.181:22-139.178.68.195:37390.service: Deactivated successfully. Jun 20 19:10:24.318300 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:10:24.320777 systemd-logind[1504]: Removed session 19. Jun 20 19:10:24.480009 systemd[1]: Started sshd@21-157.180.24.181:22-139.178.68.195:51442.service - OpenSSH per-connection server daemon (139.178.68.195:51442). Jun 20 19:10:25.449679 sshd[4424]: Accepted publickey for core from 139.178.68.195 port 51442 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:25.451006 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:25.456006 systemd-logind[1504]: New session 20 of user core. Jun 20 19:10:25.461904 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:10:27.323443 systemd[1]: run-containerd-runc-k8s.io-791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2-runc.uG6FyZ.mount: Deactivated successfully. 
Jun 20 19:10:27.334749 containerd[1527]: time="2025-06-20T19:10:27.334664193Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:10:27.401843 containerd[1527]: time="2025-06-20T19:10:27.401605771Z" level=info msg="StopContainer for \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\" with timeout 2 (s)" Jun 20 19:10:27.401843 containerd[1527]: time="2025-06-20T19:10:27.401818439Z" level=info msg="StopContainer for \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\" with timeout 30 (s)" Jun 20 19:10:27.402943 containerd[1527]: time="2025-06-20T19:10:27.402879311Z" level=info msg="Stop container \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\" with signal terminated" Jun 20 19:10:27.404418 containerd[1527]: time="2025-06-20T19:10:27.403853040Z" level=info msg="Stop container \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\" with signal terminated" Jun 20 19:10:27.413318 systemd-networkd[1436]: lxc_health: Link DOWN Jun 20 19:10:27.413324 systemd-networkd[1436]: lxc_health: Lost carrier Jun 20 19:10:27.418690 systemd[1]: cri-containerd-8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a.scope: Deactivated successfully. Jun 20 19:10:27.438525 systemd[1]: cri-containerd-791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2.scope: Deactivated successfully. Jun 20 19:10:27.438944 systemd[1]: cri-containerd-791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2.scope: Consumed 6.798s CPU time, 194.2M memory peak, 72.4M read from disk, 13.3M written to disk. Jun 20 19:10:27.447306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a-rootfs.mount: Deactivated successfully. Jun 20 19:10:27.456232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2-rootfs.mount: Deactivated successfully. 
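The "StopContainer ... with timeout 2 (s)" entry above is the CRI stop call with a 2-second grace period (SIGTERM, then SIGKILL), and the StopPodSandbox/TearDown entries that follow remove the pod's sandbox afterwards. A hedged Go sketch of both calls, reusing the container and sandbox IDs from the log as placeholders and assuming the default containerd socket:

```go
// Sketch of CRI StopContainer (with grace period) followed by StopPodSandbox.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := criv1.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	const agent = "791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2"
	const sandbox = "6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248"

	// SIGTERM, then SIGKILL after the grace period: the "with timeout 2 (s)" in the log.
	if _, err := rt.StopContainer(ctx, &criv1.StopContainerRequest{
		ContainerId: agent,
		Timeout:     2,
	}); err != nil {
		log.Fatal(err)
	}

	// Tearing down the sandbox mirrors the StopPodSandbox / "TearDown network" entries.
	if _, err := rt.StopPodSandbox(ctx, &criv1.StopPodSandboxRequest{PodSandboxId: sandbox}); err != nil {
		log.Fatal(err)
	}
}
```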
Jun 20 19:10:27.459048 containerd[1527]: time="2025-06-20T19:10:27.458952096Z" level=info msg="shim disconnected" id=8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a namespace=k8s.io Jun 20 19:10:27.459048 containerd[1527]: time="2025-06-20T19:10:27.459044199Z" level=warning msg="cleaning up after shim disconnected" id=8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a namespace=k8s.io Jun 20 19:10:27.460146 containerd[1527]: time="2025-06-20T19:10:27.459054087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:27.462737 containerd[1527]: time="2025-06-20T19:10:27.461391675Z" level=info msg="shim disconnected" id=791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2 namespace=k8s.io Jun 20 19:10:27.462737 containerd[1527]: time="2025-06-20T19:10:27.461424638Z" level=warning msg="cleaning up after shim disconnected" id=791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2 namespace=k8s.io Jun 20 19:10:27.462737 containerd[1527]: time="2025-06-20T19:10:27.461432071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:27.469578 containerd[1527]: time="2025-06-20T19:10:27.469555783Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:10:27.472647 containerd[1527]: time="2025-06-20T19:10:27.472629302Z" level=info msg="StopContainer for \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\" returns successfully" Jun 20 19:10:27.473298 containerd[1527]: time="2025-06-20T19:10:27.473267580Z" level=info msg="StopPodSandbox for \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\"" Jun 20 19:10:27.473964 containerd[1527]: time="2025-06-20T19:10:27.473868759Z" level=info msg="StopContainer for \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\" returns successfully" Jun 20 19:10:27.474241 containerd[1527]: time="2025-06-20T19:10:27.474217284Z" level=info msg="StopPodSandbox for \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\"" Jun 20 19:10:27.474527 containerd[1527]: time="2025-06-20T19:10:27.474300018Z" level=info msg="Container to stop \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:10:27.474527 containerd[1527]: time="2025-06-20T19:10:27.474323623Z" level=info msg="Container to stop \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:10:27.474527 containerd[1527]: time="2025-06-20T19:10:27.474350403Z" level=info msg="Container to stop \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:10:27.474527 containerd[1527]: time="2025-06-20T19:10:27.474357827Z" level=info msg="Container to stop \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:10:27.474527 containerd[1527]: time="2025-06-20T19:10:27.474364600Z" level=info msg="Container to stop \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:10:27.474527 containerd[1527]: time="2025-06-20T19:10:27.474370671Z" 
level=info msg="Container to stop \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:10:27.477437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df-shm.mount: Deactivated successfully. Jun 20 19:10:27.482705 systemd[1]: cri-containerd-78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df.scope: Deactivated successfully. Jun 20 19:10:27.486451 systemd[1]: cri-containerd-6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248.scope: Deactivated successfully. Jun 20 19:10:27.511759 containerd[1527]: time="2025-06-20T19:10:27.511565126Z" level=info msg="shim disconnected" id=78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df namespace=k8s.io Jun 20 19:10:27.511759 containerd[1527]: time="2025-06-20T19:10:27.511622824Z" level=warning msg="cleaning up after shim disconnected" id=78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df namespace=k8s.io Jun 20 19:10:27.511759 containerd[1527]: time="2025-06-20T19:10:27.511632542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:27.512790 containerd[1527]: time="2025-06-20T19:10:27.512508176Z" level=info msg="shim disconnected" id=6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248 namespace=k8s.io Jun 20 19:10:27.512790 containerd[1527]: time="2025-06-20T19:10:27.512546458Z" level=warning msg="cleaning up after shim disconnected" id=6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248 namespace=k8s.io Jun 20 19:10:27.512790 containerd[1527]: time="2025-06-20T19:10:27.512553612Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:27.523541 containerd[1527]: time="2025-06-20T19:10:27.523503877Z" level=info msg="TearDown network for sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" successfully" Jun 20 19:10:27.523541 containerd[1527]: time="2025-06-20T19:10:27.523527722Z" level=info msg="StopPodSandbox for \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" returns successfully" Jun 20 19:10:27.528124 containerd[1527]: time="2025-06-20T19:10:27.528098524Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:10:27.529840 containerd[1527]: time="2025-06-20T19:10:27.529795319Z" level=info msg="TearDown network for sandbox \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" successfully" Jun 20 19:10:27.529840 containerd[1527]: time="2025-06-20T19:10:27.529817691Z" level=info msg="StopPodSandbox for \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" returns successfully" Jun 20 19:10:27.618241 kubelet[2818]: I0620 19:10:27.617661 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-etc-cni-netd\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.618241 kubelet[2818]: I0620 19:10:27.617773 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd412fae-ddb2-4651-be6d-e666b34abd34-clustermesh-secrets\") pod 
\"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.618241 kubelet[2818]: I0620 19:10:27.617805 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cni-path\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.618241 kubelet[2818]: I0620 19:10:27.617828 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-run\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.618241 kubelet[2818]: I0620 19:10:27.617854 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-hubble-tls\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.618241 kubelet[2818]: I0620 19:10:27.617878 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldz48\" (UniqueName: \"kubernetes.io/projected/53e9b112-cb3a-4942-b001-e0ca1da04070-kube-api-access-ldz48\") pod \"53e9b112-cb3a-4942-b001-e0ca1da04070\" (UID: \"53e9b112-cb3a-4942-b001-e0ca1da04070\") " Jun 20 19:10:27.619871 kubelet[2818]: I0620 19:10:27.617902 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-lib-modules\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.619871 kubelet[2818]: I0620 19:10:27.617922 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-net\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.619871 kubelet[2818]: I0620 19:10:27.617950 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-config-path\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.619871 kubelet[2818]: I0620 19:10:27.617972 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-kernel\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.619871 kubelet[2818]: I0620 19:10:27.617994 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-bpf-maps\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.619871 kubelet[2818]: I0620 19:10:27.618045 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hvd9\" (UniqueName: \"kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-kube-api-access-7hvd9\") pod 
\"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.620268 kubelet[2818]: I0620 19:10:27.618065 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-xtables-lock\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.620268 kubelet[2818]: I0620 19:10:27.618101 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53e9b112-cb3a-4942-b001-e0ca1da04070-cilium-config-path\") pod \"53e9b112-cb3a-4942-b001-e0ca1da04070\" (UID: \"53e9b112-cb3a-4942-b001-e0ca1da04070\") " Jun 20 19:10:27.620268 kubelet[2818]: I0620 19:10:27.618122 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-hostproc\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.620268 kubelet[2818]: I0620 19:10:27.618144 2818 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-cgroup\") pod \"dd412fae-ddb2-4651-be6d-e666b34abd34\" (UID: \"dd412fae-ddb2-4651-be6d-e666b34abd34\") " Jun 20 19:10:27.626749 kubelet[2818]: I0620 19:10:27.624315 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.626961 kubelet[2818]: I0620 19:10:27.624555 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.639045 kubelet[2818]: I0620 19:10:27.638660 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:10:27.639045 kubelet[2818]: I0620 19:10:27.638807 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cni-path" (OuterVolumeSpecName: "cni-path") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.639045 kubelet[2818]: I0620 19:10:27.638834 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.640041 kubelet[2818]: I0620 19:10:27.639821 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.640041 kubelet[2818]: I0620 19:10:27.639859 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.645496 kubelet[2818]: I0620 19:10:27.645466 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-kube-api-access-7hvd9" (OuterVolumeSpecName: "kube-api-access-7hvd9") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "kube-api-access-7hvd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:10:27.647751 kubelet[2818]: I0620 19:10:27.645608 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.647751 kubelet[2818]: I0620 19:10:27.645790 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:10:27.648168 kubelet[2818]: I0620 19:10:27.648141 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e9b112-cb3a-4942-b001-e0ca1da04070-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53e9b112-cb3a-4942-b001-e0ca1da04070" (UID: "53e9b112-cb3a-4942-b001-e0ca1da04070"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:10:27.648272 kubelet[2818]: I0620 19:10:27.648254 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-hostproc" (OuterVolumeSpecName: "hostproc") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.648481 kubelet[2818]: I0620 19:10:27.648458 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd412fae-ddb2-4651-be6d-e666b34abd34-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:10:27.649374 kubelet[2818]: I0620 19:10:27.649219 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e9b112-cb3a-4942-b001-e0ca1da04070-kube-api-access-ldz48" (OuterVolumeSpecName: "kube-api-access-ldz48") pod "53e9b112-cb3a-4942-b001-e0ca1da04070" (UID: "53e9b112-cb3a-4942-b001-e0ca1da04070"). InnerVolumeSpecName "kube-api-access-ldz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:10:27.649632 kubelet[2818]: I0620 19:10:27.649329 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.650023 kubelet[2818]: I0620 19:10:27.649530 2818 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dd412fae-ddb2-4651-be6d-e666b34abd34" (UID: "dd412fae-ddb2-4651-be6d-e666b34abd34"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:10:27.718963 kubelet[2818]: I0620 19:10:27.718901 2818 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hvd9\" (UniqueName: \"kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-kube-api-access-7hvd9\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.719337 kubelet[2818]: I0620 19:10:27.719292 2818 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-xtables-lock\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.719478 kubelet[2818]: I0620 19:10:27.719459 2818 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53e9b112-cb3a-4942-b001-e0ca1da04070-cilium-config-path\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.719845 kubelet[2818]: I0620 19:10:27.719770 2818 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-cgroup\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.720081 kubelet[2818]: I0620 19:10:27.720047 2818 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-hostproc\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.720235 kubelet[2818]: I0620 19:10:27.720217 2818 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-etc-cni-netd\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.720372 kubelet[2818]: I0620 19:10:27.720355 2818 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cni-path\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.720541 kubelet[2818]: I0620 19:10:27.720493 2818 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd412fae-ddb2-4651-be6d-e666b34abd34-clustermesh-secrets\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.720690 kubelet[2818]: I0620 19:10:27.720664 2818 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldz48\" (UniqueName: \"kubernetes.io/projected/53e9b112-cb3a-4942-b001-e0ca1da04070-kube-api-access-ldz48\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.720853 kubelet[2818]: I0620 19:10:27.720834 2818 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-run\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.720990 kubelet[2818]: I0620 19:10:27.720973 2818 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd412fae-ddb2-4651-be6d-e666b34abd34-hubble-tls\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.721122 kubelet[2818]: I0620 19:10:27.721104 2818 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-kernel\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.721267 kubelet[2818]: I0620 19:10:27.721250 
2818 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-bpf-maps\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.721368 kubelet[2818]: I0620 19:10:27.721349 2818 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-lib-modules\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.721468 kubelet[2818]: I0620 19:10:27.721451 2818 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd412fae-ddb2-4651-be6d-e666b34abd34-host-proc-sys-net\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.721555 kubelet[2818]: I0620 19:10:27.721540 2818 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd412fae-ddb2-4651-be6d-e666b34abd34-cilium-config-path\") on node \"ci-4230-2-0-e-b360e0c6ec\" DevicePath \"\"" Jun 20 19:10:27.752266 kubelet[2818]: I0620 19:10:27.752229 2818 scope.go:117] "RemoveContainer" containerID="791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2" Jun 20 19:10:27.761578 systemd[1]: Removed slice kubepods-burstable-poddd412fae_ddb2_4651_be6d_e666b34abd34.slice - libcontainer container kubepods-burstable-poddd412fae_ddb2_4651_be6d_e666b34abd34.slice. Jun 20 19:10:27.762207 systemd[1]: kubepods-burstable-poddd412fae_ddb2_4651_be6d_e666b34abd34.slice: Consumed 6.859s CPU time, 194.5M memory peak, 73.5M read from disk, 13.3M written to disk. Jun 20 19:10:27.797587 containerd[1527]: time="2025-06-20T19:10:27.797315219Z" level=info msg="RemoveContainer for \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\"" Jun 20 19:10:27.802877 containerd[1527]: time="2025-06-20T19:10:27.802702593Z" level=info msg="RemoveContainer for \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\" returns successfully" Jun 20 19:10:27.813247 kubelet[2818]: I0620 19:10:27.811231 2818 scope.go:117] "RemoveContainer" containerID="4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636" Jun 20 19:10:27.815153 containerd[1527]: time="2025-06-20T19:10:27.814637978Z" level=info msg="RemoveContainer for \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\"" Jun 20 19:10:27.817388 systemd[1]: Removed slice kubepods-besteffort-pod53e9b112_cb3a_4942_b001_e0ca1da04070.slice - libcontainer container kubepods-besteffort-pod53e9b112_cb3a_4942_b001_e0ca1da04070.slice. 
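The RemoveContainer entries here, and the "code = NotFound" ContainerStatus errors in the entries that follow, are the normal post-removal path: once a container has been deleted, status queries for its ID fail with gRPC NotFound. A small illustrative Go sketch (not kubelet code; client construction omitted) of how a cleanup path might treat that code as "already gone":

```go
// Sketch: treat gRPC NotFound from ContainerStatus as "already removed".
package cleanup

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// statusOrGone returns (nil, nil) when the runtime no longer knows the ID,
// which is the expected outcome right after a successful RemoveContainer.
func statusOrGone(ctx context.Context, rt criv1.RuntimeServiceClient, id string) (*criv1.ContainerStatusResponse, error) {
	resp, err := rt.ContainerStatus(ctx, &criv1.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil, nil // container already deleted; not an error on a cleanup path
	}
	return resp, err
}
```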
Jun 20 19:10:27.830390 containerd[1527]: time="2025-06-20T19:10:27.830349553Z" level=info msg="RemoveContainer for \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\" returns successfully" Jun 20 19:10:27.831866 kubelet[2818]: I0620 19:10:27.831847 2818 scope.go:117] "RemoveContainer" containerID="b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb" Jun 20 19:10:27.833650 containerd[1527]: time="2025-06-20T19:10:27.833624091Z" level=info msg="RemoveContainer for \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\"" Jun 20 19:10:27.837898 containerd[1527]: time="2025-06-20T19:10:27.837802585Z" level=info msg="RemoveContainer for \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\" returns successfully" Jun 20 19:10:27.838319 kubelet[2818]: I0620 19:10:27.838291 2818 scope.go:117] "RemoveContainer" containerID="d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49" Jun 20 19:10:27.839169 containerd[1527]: time="2025-06-20T19:10:27.839139836Z" level=info msg="RemoveContainer for \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\"" Jun 20 19:10:27.841510 containerd[1527]: time="2025-06-20T19:10:27.841481392Z" level=info msg="RemoveContainer for \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\" returns successfully" Jun 20 19:10:27.841604 kubelet[2818]: I0620 19:10:27.841577 2818 scope.go:117] "RemoveContainer" containerID="7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71" Jun 20 19:10:27.842377 containerd[1527]: time="2025-06-20T19:10:27.842350562Z" level=info msg="RemoveContainer for \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\"" Jun 20 19:10:27.844555 containerd[1527]: time="2025-06-20T19:10:27.844529864Z" level=info msg="RemoveContainer for \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\" returns successfully" Jun 20 19:10:27.844698 kubelet[2818]: I0620 19:10:27.844629 2818 scope.go:117] "RemoveContainer" containerID="791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2" Jun 20 19:10:27.845949 containerd[1527]: time="2025-06-20T19:10:27.845825187Z" level=error msg="ContainerStatus for \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\": not found" Jun 20 19:10:27.849421 kubelet[2818]: E0620 19:10:27.849353 2818 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\": not found" containerID="791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2" Jun 20 19:10:27.858111 kubelet[2818]: I0620 19:10:27.849387 2818 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2"} err="failed to get container status \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"791489ed28d47be69e55ad258f08cbb3a5b1e2fab822657dd5aa6feb244c2cc2\": not found" Jun 20 19:10:27.858111 kubelet[2818]: I0620 19:10:27.858101 2818 scope.go:117] "RemoveContainer" containerID="4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636" Jun 20 19:10:27.858263 containerd[1527]: 
time="2025-06-20T19:10:27.858234673Z" level=error msg="ContainerStatus for \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\": not found" Jun 20 19:10:27.858339 kubelet[2818]: E0620 19:10:27.858326 2818 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\": not found" containerID="4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636" Jun 20 19:10:27.858374 kubelet[2818]: I0620 19:10:27.858342 2818 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636"} err="failed to get container status \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\": rpc error: code = NotFound desc = an error occurred when try to find container \"4984c7e27817cc10a5a6ebd93c74036bff96a77d5e88de574566b3390c032636\": not found" Jun 20 19:10:27.858374 kubelet[2818]: I0620 19:10:27.858353 2818 scope.go:117] "RemoveContainer" containerID="b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb" Jun 20 19:10:27.858523 containerd[1527]: time="2025-06-20T19:10:27.858483349Z" level=error msg="ContainerStatus for \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\": not found" Jun 20 19:10:27.858577 kubelet[2818]: E0620 19:10:27.858559 2818 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\": not found" containerID="b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb" Jun 20 19:10:27.858603 kubelet[2818]: I0620 19:10:27.858576 2818 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb"} err="failed to get container status \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7e3ff326315474cf9c7d4cebd765f60019d2f58db5efa245ee80d66964134eb\": not found" Jun 20 19:10:27.858603 kubelet[2818]: I0620 19:10:27.858586 2818 scope.go:117] "RemoveContainer" containerID="d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49" Jun 20 19:10:27.858735 containerd[1527]: time="2025-06-20T19:10:27.858691419Z" level=error msg="ContainerStatus for \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\": not found" Jun 20 19:10:27.858812 kubelet[2818]: E0620 19:10:27.858793 2818 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\": not found" containerID="d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49" Jun 20 19:10:27.858897 kubelet[2818]: I0620 19:10:27.858813 2818 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49"} err="failed to get container status \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4cf972954c94ae4740844db4ce3ecc9adea3146e83a9d275e3c407f2ab8fd49\": not found" Jun 20 19:10:27.858897 kubelet[2818]: I0620 19:10:27.858848 2818 scope.go:117] "RemoveContainer" containerID="7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71" Jun 20 19:10:27.858991 containerd[1527]: time="2025-06-20T19:10:27.858959122Z" level=error msg="ContainerStatus for \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\": not found" Jun 20 19:10:27.859068 kubelet[2818]: E0620 19:10:27.859046 2818 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\": not found" containerID="7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71" Jun 20 19:10:27.859068 kubelet[2818]: I0620 19:10:27.859061 2818 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71"} err="failed to get container status \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f7342bba161968239a40ea34b04d17166e62405a2e3a1c9152413b501c52b71\": not found" Jun 20 19:10:27.859115 kubelet[2818]: I0620 19:10:27.859070 2818 scope.go:117] "RemoveContainer" containerID="8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a" Jun 20 19:10:27.859749 containerd[1527]: time="2025-06-20T19:10:27.859706275Z" level=info msg="RemoveContainer for \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\"" Jun 20 19:10:27.861947 containerd[1527]: time="2025-06-20T19:10:27.861929628Z" level=info msg="RemoveContainer for \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\" returns successfully" Jun 20 19:10:27.862160 kubelet[2818]: I0620 19:10:27.862086 2818 scope.go:117] "RemoveContainer" containerID="8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a" Jun 20 19:10:27.862260 containerd[1527]: time="2025-06-20T19:10:27.862216848Z" level=error msg="ContainerStatus for \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\": not found" Jun 20 19:10:27.862309 kubelet[2818]: E0620 19:10:27.862290 2818 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\": not found" containerID="8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a" Jun 20 19:10:27.862377 kubelet[2818]: I0620 19:10:27.862306 2818 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a"} err="failed to 
get container status \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f464ffdf1f7a36cab3428da921bd18d261e7d36d52c39e372a678e5b653ac7a\": not found" Jun 20 19:10:28.318625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df-rootfs.mount: Deactivated successfully. Jun 20 19:10:28.319116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248-rootfs.mount: Deactivated successfully. Jun 20 19:10:28.319259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248-shm.mount: Deactivated successfully. Jun 20 19:10:28.319367 systemd[1]: var-lib-kubelet-pods-53e9b112\x2dcb3a\x2d4942\x2db001\x2de0ca1da04070-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldz48.mount: Deactivated successfully. Jun 20 19:10:28.319468 systemd[1]: var-lib-kubelet-pods-dd412fae\x2dddb2\x2d4651\x2dbe6d\x2de666b34abd34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7hvd9.mount: Deactivated successfully. Jun 20 19:10:28.319568 systemd[1]: var-lib-kubelet-pods-dd412fae\x2dddb2\x2d4651\x2dbe6d\x2de666b34abd34-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:10:28.319661 systemd[1]: var-lib-kubelet-pods-dd412fae\x2dddb2\x2d4651\x2dbe6d\x2de666b34abd34-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:10:29.084479 kubelet[2818]: I0620 19:10:29.084389 2818 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e9b112-cb3a-4942-b001-e0ca1da04070" path="/var/lib/kubelet/pods/53e9b112-cb3a-4942-b001-e0ca1da04070/volumes" Jun 20 19:10:29.085248 kubelet[2818]: I0620 19:10:29.085091 2818 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd412fae-ddb2-4651-be6d-e666b34abd34" path="/var/lib/kubelet/pods/dd412fae-ddb2-4651-be6d-e666b34abd34/volumes" Jun 20 19:10:29.385460 sshd[4426]: Connection closed by 139.178.68.195 port 51442 Jun 20 19:10:29.386341 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:29.390010 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:10:29.390550 systemd[1]: sshd@21-157.180.24.181:22-139.178.68.195:51442.service: Deactivated successfully. Jun 20 19:10:29.392292 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:10:29.393367 systemd-logind[1504]: Removed session 20. Jun 20 19:10:29.561158 systemd[1]: Started sshd@22-157.180.24.181:22-139.178.68.195:51448.service - OpenSSH per-connection server daemon (139.178.68.195:51448). Jun 20 19:10:30.209432 kubelet[2818]: E0620 19:10:30.209349 2818 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:10:30.539753 sshd[4586]: Accepted publickey for core from 139.178.68.195 port 51448 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:30.541020 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:30.546414 systemd-logind[1504]: New session 21 of user core. Jun 20 19:10:30.558897 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 20 19:10:30.954148 kubelet[2818]: I0620 19:10:30.954090 2818 setters.go:600] "Node became not ready" node="ci-4230-2-0-e-b360e0c6ec" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:10:30Z","lastTransitionTime":"2025-06-20T19:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 19:10:31.771272 kubelet[2818]: E0620 19:10:31.770611 2818 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd412fae-ddb2-4651-be6d-e666b34abd34" containerName="mount-cgroup" Jun 20 19:10:31.771272 kubelet[2818]: E0620 19:10:31.770635 2818 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd412fae-ddb2-4651-be6d-e666b34abd34" containerName="apply-sysctl-overwrites" Jun 20 19:10:31.771272 kubelet[2818]: E0620 19:10:31.770641 2818 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="53e9b112-cb3a-4942-b001-e0ca1da04070" containerName="cilium-operator" Jun 20 19:10:31.771272 kubelet[2818]: E0620 19:10:31.770646 2818 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd412fae-ddb2-4651-be6d-e666b34abd34" containerName="mount-bpf-fs" Jun 20 19:10:31.771272 kubelet[2818]: E0620 19:10:31.770651 2818 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd412fae-ddb2-4651-be6d-e666b34abd34" containerName="clean-cilium-state" Jun 20 19:10:31.771272 kubelet[2818]: E0620 19:10:31.770655 2818 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd412fae-ddb2-4651-be6d-e666b34abd34" containerName="cilium-agent" Jun 20 19:10:31.771272 kubelet[2818]: I0620 19:10:31.770673 2818 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e9b112-cb3a-4942-b001-e0ca1da04070" containerName="cilium-operator" Jun 20 19:10:31.771272 kubelet[2818]: I0620 19:10:31.770678 2818 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd412fae-ddb2-4651-be6d-e666b34abd34" containerName="cilium-agent" Jun 20 19:10:31.841096 systemd[1]: Created slice kubepods-burstable-pod3d623a0d_21a6_45b2_9072_a5a11f60fb08.slice - libcontainer container kubepods-burstable-pod3d623a0d_21a6_45b2_9072_a5a11f60fb08.slice. 
Jun 20 19:10:31.850049 kubelet[2818]: I0620 19:10:31.850005 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d623a0d-21a6-45b2-9072-a5a11f60fb08-clustermesh-secrets\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850049 kubelet[2818]: I0620 19:10:31.850035 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-host-proc-sys-net\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850049 kubelet[2818]: I0620 19:10:31.850049 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-lib-modules\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850181 kubelet[2818]: I0620 19:10:31.850063 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d623a0d-21a6-45b2-9072-a5a11f60fb08-cilium-config-path\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850181 kubelet[2818]: I0620 19:10:31.850078 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-cilium-run\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850181 kubelet[2818]: I0620 19:10:31.850089 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d623a0d-21a6-45b2-9072-a5a11f60fb08-hubble-tls\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850181 kubelet[2818]: I0620 19:10:31.850102 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-bpf-maps\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850181 kubelet[2818]: I0620 19:10:31.850114 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-cilium-cgroup\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850181 kubelet[2818]: I0620 19:10:31.850127 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6fl\" (UniqueName: \"kubernetes.io/projected/3d623a0d-21a6-45b2-9072-a5a11f60fb08-kube-api-access-7v6fl\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850286 kubelet[2818]: I0620 19:10:31.850138 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-hostproc\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850286 kubelet[2818]: I0620 19:10:31.850149 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-cni-path\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850286 kubelet[2818]: I0620 19:10:31.850160 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-xtables-lock\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850286 kubelet[2818]: I0620 19:10:31.850173 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d623a0d-21a6-45b2-9072-a5a11f60fb08-cilium-ipsec-secrets\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850286 kubelet[2818]: I0620 19:10:31.850184 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-host-proc-sys-kernel\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.850286 kubelet[2818]: I0620 19:10:31.850196 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d623a0d-21a6-45b2-9072-a5a11f60fb08-etc-cni-netd\") pod \"cilium-p8hhl\" (UID: \"3d623a0d-21a6-45b2-9072-a5a11f60fb08\") " pod="kube-system/cilium-p8hhl" Jun 20 19:10:31.997373 sshd[4588]: Connection closed by 139.178.68.195 port 51448 Jun 20 19:10:31.998614 sshd-session[4586]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:32.000782 systemd[1]: sshd@22-157.180.24.181:22-139.178.68.195:51448.service: Deactivated successfully. Jun 20 19:10:32.002313 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:10:32.003583 systemd-logind[1504]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:10:32.004506 systemd-logind[1504]: Removed session 21. Jun 20 19:10:32.144050 containerd[1527]: time="2025-06-20T19:10:32.143878673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8hhl,Uid:3d623a0d-21a6-45b2-9072-a5a11f60fb08,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:32.170001 containerd[1527]: time="2025-06-20T19:10:32.167635282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:10:32.170001 containerd[1527]: time="2025-06-20T19:10:32.168002041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:10:32.170001 containerd[1527]: time="2025-06-20T19:10:32.168037147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:32.170001 containerd[1527]: time="2025-06-20T19:10:32.168180956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:32.173928 systemd[1]: Started sshd@23-157.180.24.181:22-139.178.68.195:51454.service - OpenSSH per-connection server daemon (139.178.68.195:51454). Jun 20 19:10:32.192909 systemd[1]: Started cri-containerd-cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960.scope - libcontainer container cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960. Jun 20 19:10:32.217175 containerd[1527]: time="2025-06-20T19:10:32.217022346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8hhl,Uid:3d623a0d-21a6-45b2-9072-a5a11f60fb08,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\"" Jun 20 19:10:32.220958 containerd[1527]: time="2025-06-20T19:10:32.220920694Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:10:32.230135 containerd[1527]: time="2025-06-20T19:10:32.230084536Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9\"" Jun 20 19:10:32.230923 containerd[1527]: time="2025-06-20T19:10:32.230801844Z" level=info msg="StartContainer for \"988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9\"" Jun 20 19:10:32.257866 systemd[1]: Started cri-containerd-988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9.scope - libcontainer container 988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9. Jun 20 19:10:32.278388 containerd[1527]: time="2025-06-20T19:10:32.278304089Z" level=info msg="StartContainer for \"988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9\" returns successfully" Jun 20 19:10:32.292227 systemd[1]: cri-containerd-988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9.scope: Deactivated successfully. Jun 20 19:10:32.292557 systemd[1]: cri-containerd-988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9.scope: Consumed 18ms CPU time, 8.7M memory peak, 2.2M read from disk. 
Jun 20 19:10:32.322750 containerd[1527]: time="2025-06-20T19:10:32.322660328Z" level=info msg="shim disconnected" id=988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9 namespace=k8s.io Jun 20 19:10:32.323060 containerd[1527]: time="2025-06-20T19:10:32.322827922Z" level=warning msg="cleaning up after shim disconnected" id=988ecb18c87b67f2194cd8355393a5f8cc27f802af8a23e9b9c1b2f825549bc9 namespace=k8s.io Jun 20 19:10:32.323060 containerd[1527]: time="2025-06-20T19:10:32.322838352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:32.844601 containerd[1527]: time="2025-06-20T19:10:32.844420558Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:10:32.858547 containerd[1527]: time="2025-06-20T19:10:32.858477257Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800\"" Jun 20 19:10:32.859779 containerd[1527]: time="2025-06-20T19:10:32.859641934Z" level=info msg="StartContainer for \"fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800\"" Jun 20 19:10:32.897923 systemd[1]: Started cri-containerd-fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800.scope - libcontainer container fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800. Jun 20 19:10:32.928658 containerd[1527]: time="2025-06-20T19:10:32.928506883Z" level=info msg="StartContainer for \"fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800\" returns successfully" Jun 20 19:10:32.936381 systemd[1]: cri-containerd-fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800.scope: Deactivated successfully. Jun 20 19:10:32.936656 systemd[1]: cri-containerd-fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800.scope: Consumed 17ms CPU time, 6.9M memory peak, 1.7M read from disk. Jun 20 19:10:32.955632 containerd[1527]: time="2025-06-20T19:10:32.955530924Z" level=info msg="shim disconnected" id=fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800 namespace=k8s.io Jun 20 19:10:32.956679 containerd[1527]: time="2025-06-20T19:10:32.955751268Z" level=warning msg="cleaning up after shim disconnected" id=fa497bdcdcfe52c159acfebca387afa020a165eb2579e4f73d34d901d86bd800 namespace=k8s.io Jun 20 19:10:32.956679 containerd[1527]: time="2025-06-20T19:10:32.955763090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:33.144971 sshd[4622]: Accepted publickey for core from 139.178.68.195 port 51454 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:33.146926 sshd-session[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:33.153781 systemd-logind[1504]: New session 22 of user core. Jun 20 19:10:33.160992 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:10:33.819627 sshd[4777]: Connection closed by 139.178.68.195 port 51454 Jun 20 19:10:33.820603 sshd-session[4622]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:33.825602 systemd[1]: sshd@23-157.180.24.181:22-139.178.68.195:51454.service: Deactivated successfully. Jun 20 19:10:33.827499 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:10:33.829160 systemd-logind[1504]: Session 22 logged out. 
Waiting for processes to exit. Jun 20 19:10:33.830649 systemd-logind[1504]: Removed session 22. Jun 20 19:10:33.851145 containerd[1527]: time="2025-06-20T19:10:33.850648813Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:10:33.873860 containerd[1527]: time="2025-06-20T19:10:33.873794975Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16\"" Jun 20 19:10:33.874413 containerd[1527]: time="2025-06-20T19:10:33.874371187Z" level=info msg="StartContainer for \"d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16\"" Jun 20 19:10:33.912420 systemd[1]: Started cri-containerd-d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16.scope - libcontainer container d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16. Jun 20 19:10:33.939249 containerd[1527]: time="2025-06-20T19:10:33.939219134Z" level=info msg="StartContainer for \"d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16\" returns successfully" Jun 20 19:10:33.944104 systemd[1]: cri-containerd-d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16.scope: Deactivated successfully. Jun 20 19:10:33.962334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16-rootfs.mount: Deactivated successfully. Jun 20 19:10:33.968416 containerd[1527]: time="2025-06-20T19:10:33.968248942Z" level=info msg="shim disconnected" id=d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16 namespace=k8s.io Jun 20 19:10:33.968416 containerd[1527]: time="2025-06-20T19:10:33.968298935Z" level=warning msg="cleaning up after shim disconnected" id=d46178755bc3908f7da07d5fe14ed086e0af8c60dd1165ea2fd395027b8a9e16 namespace=k8s.io Jun 20 19:10:33.968416 containerd[1527]: time="2025-06-20T19:10:33.968306730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:33.988931 systemd[1]: Started sshd@24-157.180.24.181:22-139.178.68.195:33210.service - OpenSSH per-connection server daemon (139.178.68.195:33210). Jun 20 19:10:34.850403 containerd[1527]: time="2025-06-20T19:10:34.850250528Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:10:34.869061 containerd[1527]: time="2025-06-20T19:10:34.868896422Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974\"" Jun 20 19:10:34.873045 containerd[1527]: time="2025-06-20T19:10:34.870897419Z" level=info msg="StartContainer for \"e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974\"" Jun 20 19:10:34.872606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236983077.mount: Deactivated successfully. Jun 20 19:10:34.896853 systemd[1]: Started cri-containerd-e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974.scope - libcontainer container e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974. 
Jun 20 19:10:34.916193 systemd[1]: cri-containerd-e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974.scope: Deactivated successfully. Jun 20 19:10:34.918515 containerd[1527]: time="2025-06-20T19:10:34.918483390Z" level=info msg="StartContainer for \"e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974\" returns successfully" Jun 20 19:10:34.946447 containerd[1527]: time="2025-06-20T19:10:34.946383417Z" level=info msg="shim disconnected" id=e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974 namespace=k8s.io Jun 20 19:10:34.946447 containerd[1527]: time="2025-06-20T19:10:34.946433370Z" level=warning msg="cleaning up after shim disconnected" id=e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974 namespace=k8s.io Jun 20 19:10:34.946447 containerd[1527]: time="2025-06-20T19:10:34.946440123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:34.952994 sshd[4841]: Accepted publickey for core from 139.178.68.195 port 33210 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:10:34.955326 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:34.961871 systemd-logind[1504]: New session 23 of user core. Jun 20 19:10:34.964760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e362019cfc87e87c78c9b5a304c78b1f378761a141941b486e3ffdf839dec974-rootfs.mount: Deactivated successfully. Jun 20 19:10:34.972874 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:10:35.211072 kubelet[2818]: E0620 19:10:35.211011 2818 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:10:35.854464 containerd[1527]: time="2025-06-20T19:10:35.854272260Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:10:35.871325 containerd[1527]: time="2025-06-20T19:10:35.871287443Z" level=info msg="CreateContainer within sandbox \"cd57d35a8a280be477f2b3e3fd63c8d89ebcf7abde359abf0c85e4bdbaa62960\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15\"" Jun 20 19:10:35.872651 containerd[1527]: time="2025-06-20T19:10:35.872627848Z" level=info msg="StartContainer for \"3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15\"" Jun 20 19:10:35.905853 systemd[1]: Started cri-containerd-3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15.scope - libcontainer container 3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15. Jun 20 19:10:35.938499 containerd[1527]: time="2025-06-20T19:10:35.938448232Z" level=info msg="StartContainer for \"3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15\" returns successfully" Jun 20 19:10:35.966203 systemd[1]: run-containerd-runc-k8s.io-3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15-runc.wGMqNS.mount: Deactivated successfully. 
Jun 20 19:10:36.339757 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 20 19:10:39.033620 systemd-networkd[1436]: lxc_health: Link UP Jun 20 19:10:39.036837 systemd-networkd[1436]: lxc_health: Gained carrier Jun 20 19:10:40.060358 systemd[1]: run-containerd-runc-k8s.io-3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15-runc.wArzct.mount: Deactivated successfully. Jun 20 19:10:40.167341 kubelet[2818]: I0620 19:10:40.164630 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p8hhl" podStartSLOduration=9.164613707 podStartE2EDuration="9.164613707s" podCreationTimestamp="2025-06-20 19:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:36.871411597 +0000 UTC m=+341.866550613" watchObservedRunningTime="2025-06-20 19:10:40.164613707 +0000 UTC m=+345.159752723" Jun 20 19:10:40.863003 systemd-networkd[1436]: lxc_health: Gained IPv6LL Jun 20 19:10:42.211319 systemd[1]: run-containerd-runc-k8s.io-3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15-runc.lbFjW1.mount: Deactivated successfully. Jun 20 19:10:46.513608 systemd[1]: run-containerd-runc-k8s.io-3453116ed7b01e1593c7c46de37b7b02f8e2a03cd31d80280eb64d5d66d90e15-runc.c2nyUa.mount: Deactivated successfully. Jun 20 19:10:46.735242 sshd[4899]: Connection closed by 139.178.68.195 port 33210 Jun 20 19:10:46.736929 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:46.742246 systemd[1]: sshd@24-157.180.24.181:22-139.178.68.195:33210.service: Deactivated successfully. Jun 20 19:10:46.744990 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:10:46.747654 systemd-logind[1504]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:10:46.749456 systemd-logind[1504]: Removed session 23. Jun 20 19:10:55.106460 containerd[1527]: time="2025-06-20T19:10:55.106399944Z" level=info msg="StopPodSandbox for \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\"" Jun 20 19:10:55.107146 containerd[1527]: time="2025-06-20T19:10:55.106488611Z" level=info msg="TearDown network for sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" successfully" Jun 20 19:10:55.107146 containerd[1527]: time="2025-06-20T19:10:55.106499290Z" level=info msg="StopPodSandbox for \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" returns successfully" Jun 20 19:10:55.107146 containerd[1527]: time="2025-06-20T19:10:55.106835692Z" level=info msg="RemovePodSandbox for \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\"" Jun 20 19:10:55.107146 containerd[1527]: time="2025-06-20T19:10:55.106877942Z" level=info msg="Forcibly stopping sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\"" Jun 20 19:10:55.107146 containerd[1527]: time="2025-06-20T19:10:55.106923908Z" level=info msg="TearDown network for sandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" successfully" Jun 20 19:10:55.111509 containerd[1527]: time="2025-06-20T19:10:55.111470092Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 20 19:10:55.111593 containerd[1527]: time="2025-06-20T19:10:55.111536607Z" level=info msg="RemovePodSandbox \"6d4ce5a6abeaadb881fc3a99c383213b99e50e69a1a6a66388e7817e3e57f248\" returns successfully" Jun 20 19:10:55.112060 containerd[1527]: time="2025-06-20T19:10:55.111928463Z" level=info msg="StopPodSandbox for \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\"" Jun 20 19:10:55.112060 containerd[1527]: time="2025-06-20T19:10:55.111991662Z" level=info msg="TearDown network for sandbox \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" successfully" Jun 20 19:10:55.112060 containerd[1527]: time="2025-06-20T19:10:55.112002933Z" level=info msg="StopPodSandbox for \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" returns successfully" Jun 20 19:10:55.112365 containerd[1527]: time="2025-06-20T19:10:55.112304318Z" level=info msg="RemovePodSandbox for \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\"" Jun 20 19:10:55.112365 containerd[1527]: time="2025-06-20T19:10:55.112332502Z" level=info msg="Forcibly stopping sandbox \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\"" Jun 20 19:10:55.112509 containerd[1527]: time="2025-06-20T19:10:55.112373259Z" level=info msg="TearDown network for sandbox \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" successfully" Jun 20 19:10:55.115734 containerd[1527]: time="2025-06-20T19:10:55.115414778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 20 19:10:55.115734 containerd[1527]: time="2025-06-20T19:10:55.115451837Z" level=info msg="RemovePodSandbox \"78921f17c74ca3c6549d5a27565bfd72661ed2a1a07e334865974cf201c554df\" returns successfully"