Jun 20 18:57:44.927363 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025
Jun 20 18:57:44.927384 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:57:44.927394 kernel: BIOS-provided physical RAM map:
Jun 20 18:57:44.927400 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 20 18:57:44.927406 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 20 18:57:44.927412 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 20 18:57:44.927419 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jun 20 18:57:44.927425 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jun 20 18:57:44.927433 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jun 20 18:57:44.927438 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jun 20 18:57:44.927444 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 18:57:44.927450 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 20 18:57:44.927456 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 18:57:44.927463 kernel: NX (Execute Disable) protection: active
Jun 20 18:57:44.927471 kernel: APIC: Static calls initialized
Jun 20 18:57:44.927478 kernel: SMBIOS 3.0.0 present.
Jun 20 18:57:44.927485 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jun 20 18:57:44.927491 kernel: Hypervisor detected: KVM
Jun 20 18:57:44.927498 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 18:57:44.927504 kernel: kvm-clock: using sched offset of 3388564096 cycles
Jun 20 18:57:44.927511 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 18:57:44.927518 kernel: tsc: Detected 2495.312 MHz processor
Jun 20 18:57:44.927540 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 18:57:44.927548 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 18:57:44.927556 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jun 20 18:57:44.927563 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 20 18:57:44.927570 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 18:57:44.927577 kernel: Using GB pages for direct mapping
Jun 20 18:57:44.927583 kernel: ACPI: Early table checksum verification disabled
Jun 20 18:57:44.927590 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Jun 20 18:57:44.927596 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 18:57:44.927603 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 18:57:44.927610 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 18:57:44.927618 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jun 20 18:57:44.927625 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 18:57:44.927631 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 18:57:44.927638 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 18:57:44.927645 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 18:57:44.927652 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Jun 20 18:57:44.927659 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Jun 20 18:57:44.927669 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jun 20 18:57:44.927676 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Jun 20 18:57:44.927683 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Jun 20 18:57:44.927690 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Jun 20 18:57:44.927697 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Jun 20 18:57:44.927703 kernel: No NUMA configuration found
Jun 20 18:57:44.927710 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jun 20 18:57:44.927719 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jun 20 18:57:44.927725 kernel: Zone ranges:
Jun 20 18:57:44.927732 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 18:57:44.927739 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jun 20 18:57:44.927746 kernel: Normal empty
Jun 20 18:57:44.927753 kernel: Movable zone start for each node
Jun 20 18:57:44.927760 kernel: Early memory node ranges
Jun 20 18:57:44.927767 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 20 18:57:44.927773 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jun 20 18:57:44.927782 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jun 20 18:57:44.927789 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 18:57:44.927795 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 20 18:57:44.927802 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jun 20 18:57:44.927809 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 20 18:57:44.927816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 18:57:44.927823 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 18:57:44.927830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 20 18:57:44.927836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 18:57:44.927845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 18:57:44.927852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 18:57:44.927858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 18:57:44.927865 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 18:57:44.927872 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 20 18:57:44.927879 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 20 18:57:44.927886 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 20 18:57:44.927893 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jun 20 18:57:44.927900 kernel: Booting paravirtualized kernel on KVM
Jun 20 18:57:44.927907 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 18:57:44.927915 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 18:57:44.927922 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jun 20 18:57:44.927929 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jun 20 18:57:44.927936 kernel: pcpu-alloc: [0] 0 1
Jun 20 18:57:44.927943 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 20 18:57:44.927951 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:57:44.927959 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 18:57:44.927967 kernel: random: crng init done
Jun 20 18:57:44.927974 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 18:57:44.927981 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 18:57:44.927988 kernel: Fallback order for Node 0: 0
Jun 20 18:57:44.927994 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jun 20 18:57:44.928001 kernel: Policy zone: DMA32
Jun 20 18:57:44.928008 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 18:57:44.928015 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 127200K reserved, 0K cma-reserved)
Jun 20 18:57:44.928022 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 18:57:44.928030 kernel: ftrace: allocating 37938 entries in 149 pages
Jun 20 18:57:44.928038 kernel: ftrace: allocated 149 pages with 4 groups
Jun 20 18:57:44.928047 kernel: Dynamic Preempt: voluntary
Jun 20 18:57:44.928057 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 18:57:44.928067 kernel: rcu: RCU event tracing is enabled.
Jun 20 18:57:44.928076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 18:57:44.928084 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 18:57:44.928091 kernel: Rude variant of Tasks RCU enabled.
Jun 20 18:57:44.928097 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 18:57:44.928104 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 18:57:44.928113 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 18:57:44.928120 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 20 18:57:44.928127 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 18:57:44.928134 kernel: Console: colour VGA+ 80x25
Jun 20 18:57:44.928141 kernel: printk: console [tty0] enabled
Jun 20 18:57:44.928148 kernel: printk: console [ttyS0] enabled
Jun 20 18:57:44.928154 kernel: ACPI: Core revision 20230628
Jun 20 18:57:44.928162 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 20 18:57:44.928169 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 18:57:44.928177 kernel: x2apic enabled
Jun 20 18:57:44.928184 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 18:57:44.928283 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 18:57:44.928290 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 20 18:57:44.928297 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Jun 20 18:57:44.928304 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 18:57:44.928311 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 20 18:57:44.928318 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 20 18:57:44.928332 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 18:57:44.928339 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 18:57:44.928346 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 18:57:44.928353 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 20 18:57:44.928362 kernel: RETBleed: Mitigation: untrained return thunk
Jun 20 18:57:44.928369 kernel: Spectre V2 : User space: Vulnerable
Jun 20 18:57:44.928376 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 20 18:57:44.928383 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 18:57:44.928390 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 18:57:44.928399 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 18:57:44.928406 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 18:57:44.928414 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 20 18:57:44.928421 kernel: Freeing SMP alternatives memory: 32K
Jun 20 18:57:44.928428 kernel: pid_max: default: 32768 minimum: 301
Jun 20 18:57:44.928435 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jun 20 18:57:44.928442 kernel: landlock: Up and running.
Jun 20 18:57:44.928449 kernel: SELinux: Initializing.
Jun 20 18:57:44.928458 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 20 18:57:44.928465 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 20 18:57:44.928473 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 20 18:57:44.928480 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:57:44.928487 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:57:44.928494 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:57:44.928502 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 20 18:57:44.928509 kernel: ... version: 0
Jun 20 18:57:44.928516 kernel: ... bit width: 48
Jun 20 18:57:44.928541 kernel: ... generic registers: 6
Jun 20 18:57:44.928548 kernel: ... value mask: 0000ffffffffffff
Jun 20 18:57:44.928555 kernel: ... max period: 00007fffffffffff
Jun 20 18:57:44.928575 kernel: ... fixed-purpose events: 0
Jun 20 18:57:44.928583 kernel: ... event mask: 000000000000003f
Jun 20 18:57:44.928590 kernel: signal: max sigframe size: 1776
Jun 20 18:57:44.928597 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 18:57:44.928604 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 18:57:44.928611 kernel: smp: Bringing up secondary CPUs ...
Jun 20 18:57:44.928620 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 18:57:44.928628 kernel: .... node #0, CPUs: #1
Jun 20 18:57:44.928635 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 18:57:44.928644 kernel: smpboot: Max logical packages: 1
Jun 20 18:57:44.928652 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jun 20 18:57:44.928659 kernel: devtmpfs: initialized
Jun 20 18:57:44.928667 kernel: x86/mm: Memory block size: 128MB
Jun 20 18:57:44.928676 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 18:57:44.928684 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 18:57:44.928693 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 18:57:44.928702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 18:57:44.928709 kernel: audit: initializing netlink subsys (disabled)
Jun 20 18:57:44.928717 kernel: audit: type=2000 audit(1750445863.400:1): state=initialized audit_enabled=0 res=1
Jun 20 18:57:44.928724 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 18:57:44.928731 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 18:57:44.928738 kernel: cpuidle: using governor menu
Jun 20 18:57:44.928745 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 18:57:44.928759 kernel: dca service started, version 1.12.1
Jun 20 18:57:44.928766 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jun 20 18:57:44.928775 kernel: PCI: Using configuration type 1 for base access
Jun 20 18:57:44.928782 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 18:57:44.928789 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 18:57:44.928796 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 18:57:44.928804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 18:57:44.928812 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 18:57:44.928820 kernel: ACPI: Added _OSI(Module Device)
Jun 20 18:57:44.928827 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 18:57:44.928834 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 18:57:44.928843 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 18:57:44.928850 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 20 18:57:44.928857 kernel: ACPI: Interpreter enabled
Jun 20 18:57:44.928864 kernel: ACPI: PM: (supports S0 S5)
Jun 20 18:57:44.928871 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 18:57:44.928878 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 18:57:44.928886 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 18:57:44.928893 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jun 20 18:57:44.928900 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 18:57:44.929034 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 18:57:44.929125 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jun 20 18:57:44.929217 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jun 20 18:57:44.929227 kernel: PCI host bridge to bus 0000:00
Jun 20 18:57:44.929307 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 18:57:44.929376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 18:57:44.929467 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 18:57:44.929569 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jun 20 18:57:44.929639 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 20 18:57:44.929706 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jun 20 18:57:44.929774 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 18:57:44.929865 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jun 20 18:57:44.929954 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jun 20 18:57:44.930036 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jun 20 18:57:44.930135 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jun 20 18:57:44.930230 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jun 20 18:57:44.930307 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jun 20 18:57:44.930383 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 18:57:44.930468 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.930580 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jun 20 18:57:44.930666 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.930741 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jun 20 18:57:44.930824 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.930901 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jun 20 18:57:44.930988 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.931070 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jun 20 18:57:44.931173 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.931308 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jun 20 18:57:44.931389 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.931464 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jun 20 18:57:44.931563 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.931643 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jun 20 18:57:44.931726 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.931803 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jun 20 18:57:44.931885 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jun 20 18:57:44.931961 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jun 20 18:57:44.932042 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jun 20 18:57:44.932131 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jun 20 18:57:44.932241 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jun 20 18:57:44.932316 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jun 20 18:57:44.932389 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jun 20 18:57:44.932471 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jun 20 18:57:44.932578 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jun 20 18:57:44.932671 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jun 20 18:57:44.932755 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jun 20 18:57:44.932833 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jun 20 18:57:44.932910 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jun 20 18:57:44.932985 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jun 20 18:57:44.933059 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jun 20 18:57:44.933152 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jun 20 18:57:44.933261 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jun 20 18:57:44.933346 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jun 20 18:57:44.933424 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jun 20 18:57:44.933502 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jun 20 18:57:44.933640 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jun 20 18:57:44.933735 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jun 20 18:57:44.933812 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jun 20 18:57:44.933892 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jun 20 18:57:44.934174 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jun 20 18:57:44.934269 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jun 20 18:57:44.934344 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jun 20 18:57:44.934452 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jun 20 18:57:44.934588 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jun 20 18:57:44.934666 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jun 20 18:57:44.934742 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jun 20 18:57:44.934820 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jun 20 18:57:44.934904 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jun 20 18:57:44.934984 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jun 20 18:57:44.935063 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jun 20 18:57:44.935156 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jun 20 18:57:44.935250 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jun 20 18:57:44.935324 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jun 20 18:57:44.935414 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jun 20 18:57:44.935493 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jun 20 18:57:44.935608 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jun 20 18:57:44.935685 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jun 20 18:57:44.935761 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jun 20 18:57:44.935835 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jun 20 18:57:44.935845 kernel: acpiphp: Slot [0] registered
Jun 20 18:57:44.935928 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jun 20 18:57:44.936013 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jun 20 18:57:44.936090 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jun 20 18:57:44.936167 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jun 20 18:57:44.936261 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jun 20 18:57:44.936336 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jun 20 18:57:44.936411 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jun 20 18:57:44.936421 kernel: acpiphp: Slot [0-2] registered
Jun 20 18:57:44.936497 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jun 20 18:57:44.938621 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jun 20 18:57:44.938708 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jun 20 18:57:44.938719 kernel: acpiphp: Slot [0-3] registered
Jun 20 18:57:44.938792 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jun 20 18:57:44.938867 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jun 20 18:57:44.938942 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jun 20 18:57:44.938952 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 18:57:44.938960 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 18:57:44.938970 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 18:57:44.938978 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 18:57:44.938985 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jun 20 18:57:44.938993 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jun 20 18:57:44.939001 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun 20 18:57:44.939008 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun 20 18:57:44.939015 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jun 20 18:57:44.939023 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jun 20 18:57:44.939030 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jun 20 18:57:44.939040 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jun 20 18:57:44.939048 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jun 20 18:57:44.939055 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jun 20 18:57:44.939062 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jun 20 18:57:44.939069 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jun 20 18:57:44.939077 kernel: iommu: Default domain type: Translated
Jun 20 18:57:44.939084 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 18:57:44.939092 kernel: PCI: Using ACPI for IRQ routing
Jun 20 18:57:44.939099 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 18:57:44.939109 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 20 18:57:44.939116 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jun 20 18:57:44.939279 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jun 20 18:57:44.939357 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jun 20 18:57:44.939432 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 18:57:44.939442 kernel: vgaarb: loaded
Jun 20 18:57:44.939450 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 20 18:57:44.939457 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 20 18:57:44.939465 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 18:57:44.939475 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 18:57:44.939483 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 18:57:44.939490 kernel: pnp: PnP ACPI init
Jun 20 18:57:44.939606 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jun 20 18:57:44.939619 kernel: pnp: PnP ACPI: found 5 devices
Jun 20 18:57:44.939626 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 18:57:44.939634 kernel: NET: Registered PF_INET protocol family
Jun 20 18:57:44.939641 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 18:57:44.939652 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jun 20 18:57:44.939660 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 18:57:44.939667 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 18:57:44.939675 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun 20 18:57:44.939682 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jun 20 18:57:44.939690 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 20 18:57:44.939698 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 20 18:57:44.939705 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 18:57:44.939714 kernel: NET: Registered PF_XDP protocol family
Jun 20 18:57:44.939795 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jun 20 18:57:44.939872 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jun 20 18:57:44.939950 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jun 20 18:57:44.940027 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jun 20 18:57:44.940115 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jun 20 18:57:44.940214 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jun 20 18:57:44.940299 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jun 20 18:57:44.940376 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jun 20 18:57:44.940451 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jun 20 18:57:44.942557 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jun 20 18:57:44.942653 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jun 20 18:57:44.942915 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jun 20 18:57:44.942992 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jun 20 18:57:44.943075 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jun 20 18:57:44.943149 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jun 20 18:57:44.943248 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jun 20 18:57:44.943325 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jun 20 18:57:44.943400 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jun 20 18:57:44.943478 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jun 20 18:57:44.943657 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jun 20 18:57:44.943733 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jun 20 18:57:44.943815 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jun 20 18:57:44.943924 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jun 20 18:57:44.944005 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jun 20 18:57:44.944081 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jun 20 18:57:44.944157 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jun 20 18:57:44.944251 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jun 20 18:57:44.944326 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jun 20 18:57:44.944401 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jun 20 18:57:44.944475 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jun 20 18:57:44.944567 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jun 20 18:57:44.944645 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jun 20 18:57:44.944725 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jun 20 18:57:44.944799 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jun 20 18:57:44.944877 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jun 20 18:57:44.944963 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jun 20 18:57:44.945037 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 18:57:44.945108 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 18:57:44.945180 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 18:57:44.945263 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jun 20 18:57:44.945331 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jun 20 18:57:44.945398 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jun 20 18:57:44.945482 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jun 20 18:57:44.945609 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jun 20 18:57:44.945688 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jun 20 18:57:44.945758 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jun 20 18:57:44.945834 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jun 20 18:57:44.945902 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jun 20 18:57:44.945983 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jun 20 18:57:44.946058 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jun 20 18:57:44.946147 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jun 20 18:57:44.946233 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jun 20 18:57:44.946309 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jun 20 18:57:44.946377 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jun 20 18:57:44.946456 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jun 20 18:57:44.946553 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jun 20 18:57:44.946627 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jun 20 18:57:44.946707 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jun 20 18:57:44.946777 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jun 20 18:57:44.946844 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jun 20 18:57:44.946920 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jun 20 18:57:44.946994 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jun 20 18:57:44.947063 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jun 20 18:57:44.947074 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jun 20 18:57:44.947083 kernel: PCI: CLS 0 bytes, default 64
Jun 20 18:57:44.947091 kernel: Initialise system trusted keyrings
Jun 20 18:57:44.947099 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jun 20 18:57:44.947107 kernel: Key type asymmetric registered
Jun 20 18:57:44.947115 kernel: Asymmetric key parser 'x509' registered
Jun 20 18:57:44.947125 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 20 18:57:44.947133 kernel: io scheduler mq-deadline registered
Jun 20 18:57:44.947141 kernel: io scheduler kyber registered
Jun 20 18:57:44.947166 kernel: io scheduler bfq registered
Jun 20 18:57:44.947271 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jun 20 18:57:44.947355 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jun 20 18:57:44.947431 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jun 20 18:57:44.947505 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jun 20 18:57:44.947651 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jun 20 18:57:44.947730 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jun 20 18:57:44.947803 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jun 20 18:57:44.947877 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jun 20 18:57:44.947956 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jun 20 18:57:44.948031 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jun 20 18:57:44.948107 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jun 20 18:57:44.948426 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jun 20 18:57:44.948509 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jun 20 18:57:44.948604 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jun 20 18:57:44.948679 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jun 20 18:57:44.948753 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jun 20 18:57:44.948765 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jun 20 18:57:44.948838 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jun 20 18:57:44.948913 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jun 20 18:57:44.948924 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 18:57:44.948932 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jun 20 18:57:44.948943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 18:57:44.948951 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 18:57:44.948959 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 18:57:44.948967 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 18:57:44.948975 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 18:57:44.949099 kernel: rtc_cmos 00:03: RTC can wake from S4
Jun 20 18:57:44.949111 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 20 18:57:44.949180 kernel: rtc_cmos 00:03: registered as rtc0
Jun 20 18:57:44.949277 kernel: rtc_cmos 00:03: setting system clock to 2025-06-20T18:57:44 UTC (1750445864)
Jun 20 18:57:44.949346 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 20 18:57:44.949357 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 20 18:57:44.949365 kernel: NET: Registered PF_INET6 protocol family
Jun 20 18:57:44.949373 kernel: Segment Routing with IPv6
Jun 20 18:57:44.949381 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 18:57:44.949389 kernel: NET: Registered PF_PACKET protocol family
Jun 20 18:57:44.949397 kernel: Key type dns_resolver registered
Jun 20 18:57:44.949404 kernel: IPI shorthand broadcast: enabled
Jun 20 18:57:44.949416 kernel: sched_clock: Marking stable (1346008453, 148081603)->(1507610548, -13520492)
Jun 20 18:57:44.949424 kernel: registered taskstats version 1
Jun 20 18:57:44.949431 kernel: Loading compiled-in X.509 certificates
Jun 20 18:57:44.949439 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497'
Jun 20 18:57:44.949447 kernel: Key type .fscrypt registered
Jun 20 18:57:44.949455 kernel: Key type fscrypt-provisioning registered
Jun 20 18:57:44.949462 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 18:57:44.949470 kernel: ima: Allocated hash algorithm: sha1
Jun 20 18:57:44.949478 kernel: ima: No architecture policies found
Jun 20 18:57:44.949487 kernel: clk: Disabling unused clocks
Jun 20 18:57:44.949495 kernel: Freeing unused kernel image (initmem) memory: 43488K
Jun 20 18:57:44.949503 kernel: Write protecting the kernel read-only data: 38912k
Jun 20 18:57:44.949511 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jun 20 18:57:44.949518 kernel: Run /init as init process
Jun 20 18:57:44.949549 kernel: with arguments:
Jun 20 18:57:44.949558 kernel: /init
Jun 20 18:57:44.949568 kernel: with environment:
Jun 20 18:57:44.949575 kernel: HOME=/
Jun 20 18:57:44.949584 kernel: TERM=linux
Jun 20 18:57:44.949592 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 18:57:44.949601 systemd[1]: Successfully made /usr/ read-only.
Jun 20 18:57:44.949613 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:57:44.949623 systemd[1]: Detected virtualization kvm.
Jun 20 18:57:44.949631 systemd[1]: Detected architecture x86-64.
Jun 20 18:57:44.949639 systemd[1]: Running in initrd.
Jun 20 18:57:44.949647 systemd[1]: No hostname configured, using default hostname.
Jun 20 18:57:44.949657 systemd[1]: Hostname set to .
Jun 20 18:57:44.949665 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 18:57:44.949674 systemd[1]: Queued start job for default target initrd.target.
Jun 20 18:57:44.949682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:57:44.949691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:57:44.949700 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 18:57:44.949708 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:57:44.949717 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 18:57:44.949728 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 18:57:44.949737 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 18:57:44.949745 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 18:57:44.949754 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:57:44.949762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:57:44.949771 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:57:44.949780 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:57:44.949788 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:57:44.949796 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:57:44.949805 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:57:44.949813 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:57:44.949822 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 18:57:44.949830 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 18:57:44.949838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:57:44.949847 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:57:44.949856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:57:44.949864 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:57:44.949873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 18:57:44.949881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:57:44.949889 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 18:57:44.949897 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 18:57:44.949906 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:57:44.949914 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:57:44.949942 systemd-journald[188]: Collecting audit messages is disabled.
Jun 20 18:57:44.949966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:57:44.949975 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 18:57:44.949984 systemd-journald[188]: Journal started
Jun 20 18:57:44.950006 systemd-journald[188]: Runtime Journal (/run/log/journal/e64c85f248094c35a8a060997f7a627b) is 4.8M, max 38.3M, 33.5M free.
Jun 20 18:57:44.955588 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:57:44.958634 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:57:44.959073 systemd-modules-load[191]: Inserted module 'overlay'
Jun 20 18:57:44.961008 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 18:57:44.973694 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:57:45.012567 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 18:57:45.012597 kernel: Bridge firewalling registered
Jun 20 18:57:44.986584 systemd-modules-load[191]: Inserted module 'br_netfilter'
Jun 20 18:57:45.016733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:57:45.018044 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:57:45.019604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:57:45.020808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:57:45.024941 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:57:45.039123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:57:45.043675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:57:45.044856 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:57:45.047936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:57:45.054721 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:57:45.055426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:57:45.059657 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 18:57:45.069780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:57:45.078753 dracut-cmdline[225]: dracut-dracut-053
Jun 20 18:57:45.081055 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:57:45.087231 systemd-resolved[217]: Positive Trust Anchors:
Jun 20 18:57:45.087243 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:57:45.087273 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:57:45.096260 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jun 20 18:57:45.097085 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:57:45.097990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:57:45.142588 kernel: SCSI subsystem initialized
Jun 20 18:57:45.152564 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 18:57:45.169570 kernel: iscsi: registered transport (tcp)
Jun 20 18:57:45.201591 kernel: iscsi: registered transport (qla4xxx)
Jun 20 18:57:45.201689 kernel: QLogic iSCSI HBA Driver
Jun 20 18:57:45.250028 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:57:45.262810 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 18:57:45.300232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 18:57:45.300335 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:57:45.300359 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 20 18:57:45.357770 kernel: raid6: avx2x4 gen() 14122 MB/s
Jun 20 18:57:45.375596 kernel: raid6: avx2x2 gen() 19141 MB/s
Jun 20 18:57:45.392734 kernel: raid6: avx2x1 gen() 19861 MB/s
Jun 20 18:57:45.392779 kernel: raid6: using algorithm avx2x1 gen() 19861 MB/s
Jun 20 18:57:45.410826 kernel: raid6: .... xor() 16171 MB/s, rmw enabled
Jun 20 18:57:45.410864 kernel: raid6: using avx2x2 recovery algorithm
Jun 20 18:57:45.431588 kernel: xor: automatically using best checksumming function avx
Jun 20 18:57:45.576583 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 18:57:45.594325 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:57:45.602881 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:57:45.616358 systemd-udevd[408]: Using default interface naming scheme 'v255'.
Jun 20 18:57:45.620658 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:57:45.631774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 18:57:45.655676 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jun 20 18:57:45.707724 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:57:45.713740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:57:45.780817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:57:45.793811 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 18:57:45.823355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:57:45.825601 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:57:45.829508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:57:45.832030 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:57:45.840721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 18:57:45.853562 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:57:45.874624 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 18:57:45.882612 kernel: scsi host0: Virtio SCSI HBA
Jun 20 18:57:45.891579 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jun 20 18:57:45.932205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:57:45.932372 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:57:45.934894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:57:45.937288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:57:45.937443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:57:45.938770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:57:45.953162 kernel: ACPI: bus type USB registered
Jun 20 18:57:45.953214 kernel: usbcore: registered new interface driver usbfs
Jun 20 18:57:45.956429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:57:45.957582 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:57:45.961541 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 20 18:57:45.966616 kernel: AES CTR mode by8 optimization enabled
Jun 20 18:57:45.980899 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jun 20 18:57:45.983547 kernel: usbcore: registered new interface driver hub
Jun 20 18:57:45.983579 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jun 20 18:57:45.987970 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 20 18:57:45.988334 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jun 20 18:57:45.988439 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:57:45.992567 kernel: usbcore: registered new device driver usb
Jun 20 18:57:45.992600 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 18:57:45.992615 kernel: GPT:17805311 != 80003071
Jun 20 18:57:45.992628 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 18:57:45.992639 kernel: GPT:17805311 != 80003071
Jun 20 18:57:45.992648 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 18:57:45.992662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:57:45.993549 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 20 18:57:45.999540 kernel: libata version 3.00 loaded.
Jun 20 18:57:46.016619 kernel: ahci 0000:00:1f.2: version 3.0
Jun 20 18:57:46.018590 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jun 20 18:57:46.019700 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jun 20 18:57:46.019838 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jun 20 18:57:46.021544 kernel: scsi host1: ahci
Jun 20 18:57:46.023551 kernel: scsi host2: ahci
Jun 20 18:57:46.024561 kernel: scsi host3: ahci
Jun 20 18:57:46.028667 kernel: scsi host4: ahci
Jun 20 18:57:46.033551 kernel: scsi host5: ahci
Jun 20 18:57:46.034545 kernel: scsi host6: ahci
Jun 20 18:57:46.034669 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Jun 20 18:57:46.034695 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Jun 20 18:57:46.034708 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Jun 20 18:57:46.034719 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Jun 20 18:57:46.034730 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Jun 20 18:57:46.034739 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Jun 20 18:57:46.097959 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (456)
Jun 20 18:57:46.102550 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (452)
Jun 20 18:57:46.106949 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jun 20 18:57:46.108677 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:57:46.119613 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jun 20 18:57:46.128795 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jun 20 18:57:46.129360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jun 20 18:57:46.139147 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jun 20 18:57:46.149753 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 18:57:46.152139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:57:46.157753 disk-uuid[565]: Primary Header is updated.
Jun 20 18:57:46.157753 disk-uuid[565]: Secondary Entries is updated.
Jun 20 18:57:46.157753 disk-uuid[565]: Secondary Header is updated.
Jun 20 18:57:46.166351 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:57:46.169553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:57:46.357574 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jun 20 18:57:46.357688 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jun 20 18:57:46.361575 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 20 18:57:46.361640 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jun 20 18:57:46.364573 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jun 20 18:57:46.368549 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jun 20 18:57:46.371603 kernel: ata1.00: applying bridge limits
Jun 20 18:57:46.374398 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jun 20 18:57:46.374627 kernel: ata1.00: configured for UDMA/100
Jun 20 18:57:46.381671 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jun 20 18:57:46.414342 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jun 20 18:57:46.414776 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jun 20 18:57:46.420650 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jun 20 18:57:46.425952 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jun 20 18:57:46.426277 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jun 20 18:57:46.428898 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jun 20 18:57:46.432782 kernel: hub 1-0:1.0: USB hub found
Jun 20 18:57:46.433139 kernel: hub 1-0:1.0: 4 ports detected
Jun 20 18:57:46.435039 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jun 20 18:57:46.437699 kernel: hub 2-0:1.0: USB hub found
Jun 20 18:57:46.440420 kernel: hub 2-0:1.0: 4 ports detected
Jun 20 18:57:46.440715 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jun 20 18:57:46.445574 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 18:57:46.456605 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jun 20 18:57:46.680610 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jun 20 18:57:46.826595 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 20 18:57:46.836005 kernel: usbcore: registered new interface driver usbhid
Jun 20 18:57:46.836076 kernel: usbhid: USB HID core driver
Jun 20 18:57:46.849518 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jun 20 18:57:46.849630 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jun 20 18:57:47.183610 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:57:47.184240 disk-uuid[568]: The operation has completed successfully.
Jun 20 18:57:47.271897 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 18:57:47.272085 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 18:57:47.335757 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 18:57:47.339040 sh[593]: Success
Jun 20 18:57:47.352552 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jun 20 18:57:47.415383 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 18:57:47.423443 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 18:57:47.425228 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 18:57:47.449345 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91
Jun 20 18:57:47.449467 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:57:47.449485 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 20 18:57:47.451756 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 20 18:57:47.453799 kernel: BTRFS info (device dm-0): using free space tree
Jun 20 18:57:47.465595 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jun 20 18:57:47.468134 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 18:57:47.470801 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 18:57:47.476748 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 18:57:47.482841 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 18:57:47.495725 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:57:47.495787 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:57:47.497831 kernel: BTRFS info (device sda6): using free space tree
Jun 20 18:57:47.501688 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 18:57:47.501744 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 18:57:47.507593 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:57:47.510674 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 18:57:47.518801 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 18:57:47.577770 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:57:47.586707 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:57:47.623124 systemd-networkd[771]: lo: Link UP
Jun 20 18:57:47.623625 systemd-networkd[771]: lo: Gained carrier
Jun 20 18:57:47.627479 systemd-networkd[771]: Enumeration completed
Jun 20 18:57:47.628714 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:57:47.629043 ignition[666]: Ignition 2.20.0
Jun 20 18:57:47.629374 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:57:47.629049 ignition[666]: Stage: fetch-offline
Jun 20 18:57:47.629377 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:57:47.629080 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:57:47.630950 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:57:47.629093 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 18:57:47.632870 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:57:47.629217 ignition[666]: parsed url from cmdline: ""
Jun 20 18:57:47.632873 systemd-networkd[771]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:57:47.629221 ignition[666]: no config URL provided
Jun 20 18:57:47.633010 systemd[1]: Reached target network.target - Network.
Jun 20 18:57:47.629226 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:57:47.634156 systemd-networkd[771]: eth0: Link UP
Jun 20 18:57:47.629234 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:57:47.634159 systemd-networkd[771]: eth0: Gained carrier
Jun 20 18:57:47.629243 ignition[666]: failed to fetch config: resource requires networking
Jun 20 18:57:47.634166 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:57:47.629462 ignition[666]: Ignition finished successfully
Jun 20 18:57:47.639001 systemd-networkd[771]: eth1: Link UP
Jun 20 18:57:47.639004 systemd-networkd[771]: eth1: Gained carrier
Jun 20 18:57:47.639013 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:57:47.639697 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 18:57:47.653150 ignition[780]: Ignition 2.20.0 Jun 20 18:57:47.653162 ignition[780]: Stage: fetch Jun 20 18:57:47.653337 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:57:47.653345 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:57:47.653419 ignition[780]: parsed url from cmdline: "" Jun 20 18:57:47.653422 ignition[780]: no config URL provided Jun 20 18:57:47.653426 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:57:47.653432 ignition[780]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:57:47.653457 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jun 20 18:57:47.653609 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 20 18:57:47.683623 systemd-networkd[771]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 18:57:47.698655 systemd-networkd[771]: eth0: DHCPv4 address 157.180.74.176/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 18:57:47.853918 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jun 20 18:57:47.859259 ignition[780]: GET result: OK Jun 20 18:57:47.859358 ignition[780]: parsing config with SHA512: 3836b3a49b15407e1ffd7e41ec26004b45334ca024a161df02c0d7afa10eae7c926ded57668fdf763aae5a898f97a4c96db5ce6bbd3c70bcc7c3f3864bc068e5 Jun 20 18:57:47.867757 unknown[780]: fetched base config from "system" Jun 20 18:57:47.868778 unknown[780]: fetched base config from "system" Jun 20 18:57:47.868795 unknown[780]: fetched user config from "hetzner" Jun 20 18:57:47.869367 ignition[780]: fetch: fetch complete Jun 20 18:57:47.871161 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Jun 20 18:57:47.869374 ignition[780]: fetch: fetch passed
Jun 20 18:57:47.869427 ignition[780]: Ignition finished successfully
Jun 20 18:57:47.878776 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 18:57:47.904554 ignition[788]: Ignition 2.20.0
Jun 20 18:57:47.905603 ignition[788]: Stage: kargs
Jun 20 18:57:47.905839 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:57:47.905853 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 18:57:47.907015 ignition[788]: kargs: kargs passed
Jun 20 18:57:47.908518 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 18:57:47.907066 ignition[788]: Ignition finished successfully
Jun 20 18:57:47.914767 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 18:57:47.929585 ignition[794]: Ignition 2.20.0
Jun 20 18:57:47.929598 ignition[794]: Stage: disks
Jun 20 18:57:47.929794 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:57:47.932487 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 18:57:47.929804 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 18:57:47.936019 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 18:57:47.930888 ignition[794]: disks: disks passed
Jun 20 18:57:47.937010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 18:57:47.930936 ignition[794]: Ignition finished successfully
Jun 20 18:57:47.938141 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:57:47.939252 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:57:47.940144 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:57:47.950834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 18:57:47.965271 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jun 20 18:57:47.968464 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 18:57:47.973646 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 18:57:48.056576 kernel: EXT4-fs (sda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none.
Jun 20 18:57:48.057137 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 18:57:48.058132 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:57:48.063656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:57:48.066506 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 18:57:48.070351 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 20 18:57:48.073456 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 18:57:48.073495 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:57:48.077125 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 18:57:48.079817 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 18:57:48.086591 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (810)
Jun 20 18:57:48.090555 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:57:48.094144 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:57:48.094177 kernel: BTRFS info (device sda6): using free space tree
Jun 20 18:57:48.109685 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 18:57:48.109769 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 18:57:48.113464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 18:57:48.147192 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 18:57:48.152248 coreos-metadata[812]: Jun 20 18:57:48.152 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jun 20 18:57:48.153594 coreos-metadata[812]: Jun 20 18:57:48.153 INFO Fetch successful
Jun 20 18:57:48.153594 coreos-metadata[812]: Jun 20 18:57:48.153 INFO wrote hostname ci-4230-2-0-4-ec216ba796 to /sysroot/etc/hostname
Jun 20 18:57:48.155756 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jun 20 18:57:48.156649 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 18:57:48.161430 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 18:57:48.165238 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 18:57:48.234044 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 18:57:48.238641 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 18:57:48.242466 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 18:57:48.250563 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:57:48.271642 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 18:57:48.273399 ignition[928]: INFO : Ignition 2.20.0
Jun 20 18:57:48.273399 ignition[928]: INFO : Stage: mount
Jun 20 18:57:48.274750 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:57:48.274750 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 18:57:48.275977 ignition[928]: INFO : mount: mount passed
Jun 20 18:57:48.275977 ignition[928]: INFO : Ignition finished successfully
Jun 20 18:57:48.276086 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 18:57:48.282623 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 18:57:48.447830 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 18:57:48.456933 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:57:48.471607 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (939)
Jun 20 18:57:48.474646 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:57:48.474693 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:57:48.476857 kernel: BTRFS info (device sda6): using free space tree
Jun 20 18:57:48.492150 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 18:57:48.492216 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 18:57:48.497711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 18:57:48.525897 ignition[955]: INFO : Ignition 2.20.0
Jun 20 18:57:48.527358 ignition[955]: INFO : Stage: files
Jun 20 18:57:48.527358 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:57:48.527358 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 18:57:48.531573 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 18:57:48.531573 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 18:57:48.531573 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 18:57:48.536325 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 18:57:48.536325 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 18:57:48.539556 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 18:57:48.539556 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 18:57:48.539556 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jun 20 18:57:48.536980 unknown[955]: wrote ssh authorized keys file for user: core
Jun 20 18:57:48.718515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 18:57:48.817777 systemd-networkd[771]: eth0: Gained IPv6LL
Jun 20 18:57:49.196971 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 18:57:49.196971 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 18:57:49.198860 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 18:57:49.713828 systemd-networkd[771]: eth1: Gained IPv6LL
Jun 20 18:57:49.846916 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 18:57:49.964916 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 18:57:49.964916 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 18:57:49.969126 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 20 18:57:50.696054 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 18:59:34.211852 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 18:59:34.211852 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 18:59:34.215741 ignition[955]: INFO : files: files passed
Jun 20 18:59:34.215741 ignition[955]: INFO : Ignition finished successfully
Jun 20 18:59:34.216616 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 18:59:34.227729 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 18:59:34.233301 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 18:59:34.237813 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 18:59:34.238574 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 18:59:34.250377 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:59:34.250377 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:59:34.254353 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:59:34.256314 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 18:59:34.259323 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 18:59:34.265799 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 18:59:34.324474 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 18:59:34.324677 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 18:59:34.327370 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 18:59:34.329998 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 18:59:34.332520 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 18:59:34.341088 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 18:59:34.362776 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 18:59:34.371782 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 18:59:34.391085 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:59:34.392680 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:59:34.395176 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 18:59:34.397483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 18:59:34.397798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 18:59:34.400154 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 18:59:34.401824 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 18:59:34.404420 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 18:59:34.406599 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:59:34.408944 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 18:59:34.411781 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 18:59:34.414227 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:59:34.416239 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 18:59:34.418516 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 18:59:34.420628 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 18:59:34.422721 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 18:59:34.422959 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:59:34.425741 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:59:34.427307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:59:34.429493 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 18:59:34.429701 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:59:34.432073 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 18:59:34.432275 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:59:34.435679 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 18:59:34.435864 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 18:59:34.437628 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 18:59:34.437778 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 18:59:34.439437 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 20 18:59:34.439636 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 18:59:34.448199 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 18:59:34.452822 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 18:59:34.453624 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 18:59:34.453798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:59:34.459325 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 18:59:34.459498 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:59:34.465908 ignition[1008]: INFO : Ignition 2.20.0
Jun 20 18:59:34.465908 ignition[1008]: INFO : Stage: umount
Jun 20 18:59:34.470093 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:59:34.470093 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 18:59:34.470093 ignition[1008]: INFO : umount: umount passed
Jun 20 18:59:34.470093 ignition[1008]: INFO : Ignition finished successfully
Jun 20 18:59:34.468846 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 18:59:34.468962 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 18:59:34.482046 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 18:59:34.482662 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 18:59:34.489191 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 18:59:34.489244 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 18:59:34.491862 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 18:59:34.491956 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 18:59:34.494330 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 18:59:34.494383 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 18:59:34.495051 systemd[1]: Stopped target network.target - Network.
Jun 20 18:59:34.496663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 18:59:34.496747 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:59:34.498080 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 18:59:34.499430 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 18:59:34.499506 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:59:34.500944 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 18:59:34.502562 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 18:59:34.504419 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 18:59:34.504475 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:59:34.505759 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 18:59:34.505796 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:59:34.507335 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 18:59:34.507403 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 18:59:34.509662 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 18:59:34.509718 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 18:59:34.511262 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 18:59:34.513074 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 18:59:34.516489 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 18:59:34.519296 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 18:59:34.519436 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 18:59:34.524956 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 18:59:34.525247 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 18:59:34.525366 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 18:59:34.527809 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 18:59:34.527929 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 18:59:34.530393 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 18:59:34.531695 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 18:59:34.532105 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:59:34.533851 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 18:59:34.533912 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 18:59:34.539666 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 18:59:34.540460 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 18:59:34.540541 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:59:34.542509 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 18:59:34.542589 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:59:34.545670 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 18:59:34.545725 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:59:34.546841 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 18:59:34.546896 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:59:34.548902 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:59:34.551179 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 18:59:34.551258 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:59:34.559875 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 18:59:34.561398 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:59:34.562826 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 18:59:34.562926 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 18:59:34.565143 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 18:59:34.565221 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:59:34.566410 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 18:59:34.566466 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:59:34.568088 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 18:59:34.568147 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:59:34.570978 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 18:59:34.571033 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:59:34.572802 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:59:34.572859 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:59:34.580789 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 18:59:34.582453 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 18:59:34.582544 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:59:34.585101 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 20 18:59:34.585153 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:59:34.586168 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 18:59:34.586221 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:59:34.586951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:59:34.586996 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:59:34.588850 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 18:59:34.588917 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:59:34.589303 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 18:59:34.589404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 18:59:34.591702 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 18:59:34.600642 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 18:59:34.606933 systemd[1]: Switching root.
Jun 20 18:59:34.641923 systemd-journald[188]: Journal stopped
Jun 20 18:59:35.635856 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Jun 20 18:59:35.635943 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 18:59:35.635956 kernel: SELinux: policy capability open_perms=1
Jun 20 18:59:35.635965 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 18:59:35.635974 kernel: SELinux: policy capability always_check_network=0
Jun 20 18:59:35.635984 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 18:59:35.635994 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 18:59:35.636003 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 18:59:35.636015 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 18:59:35.636024 kernel: audit: type=1403 audit(1750445974.753:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 18:59:35.636818 systemd[1]: Successfully loaded SELinux policy in 52.845ms.
Jun 20 18:59:35.636859 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.172ms.
Jun 20 18:59:35.636875 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:59:35.636886 systemd[1]: Detected virtualization kvm.
Jun 20 18:59:35.636896 systemd[1]: Detected architecture x86-64.
Jun 20 18:59:35.636906 systemd[1]: Detected first boot.
Jun 20 18:59:35.636918 systemd[1]: Hostname set to .
Jun 20 18:59:35.636928 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 18:59:35.636937 zram_generator::config[1053]: No configuration found.
Jun 20 18:59:35.636951 kernel: Guest personality initialized and is inactive
Jun 20 18:59:35.636961 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 18:59:35.636970 kernel: Initialized host personality
Jun 20 18:59:35.636980 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 18:59:35.636989 systemd[1]: Populated /etc with preset unit settings.
Jun 20 18:59:35.637000 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 18:59:35.637011 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 18:59:35.637021 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 18:59:35.637030 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 18:59:35.637041 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 18:59:35.637051 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 18:59:35.637061 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 18:59:35.637071 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 18:59:35.637081 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 18:59:35.637092 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 18:59:35.637102 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 18:59:35.637112 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 18:59:35.637122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:59:35.637132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:59:35.637142 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 18:59:35.637152 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 18:59:35.637162 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 18:59:35.637174 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:59:35.637184 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 18:59:35.637195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:59:35.637206 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 18:59:35.637216 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 18:59:35.637226 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:59:35.637236 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 18:59:35.637248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:59:35.637261 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:59:35.637273 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:59:35.637286 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:59:35.637296 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 18:59:35.637307 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 18:59:35.637319 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 18:59:35.637329 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:59:35.637339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:59:35.641226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:59:35.641242 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 18:59:35.641253 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 18:59:35.641263 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 18:59:35.641273 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 18:59:35.641283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:35.641297 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 18:59:35.641307 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 18:59:35.641317 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 18:59:35.641327 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 18:59:35.641337 systemd[1]: Reached target machines.target - Containers.
Jun 20 18:59:35.641347 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 18:59:35.641358 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:59:35.641369 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:59:35.641380 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 18:59:35.641391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:59:35.641403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 18:59:35.641413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:59:35.641423 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 18:59:35.641433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:59:35.641508 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 18:59:35.641519 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 18:59:35.641551 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 18:59:35.641564 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 18:59:35.641574 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 18:59:35.641584 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:59:35.641595 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:59:35.641604 kernel: fuse: init (API version 7.39)
Jun 20 18:59:35.641614 kernel: loop: module loaded
Jun 20 18:59:35.641627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:59:35.641637 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 18:59:35.641648 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 18:59:35.641659 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 18:59:35.641669 kernel: ACPI: bus type drm_connector registered
Jun 20 18:59:35.641678 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:59:35.641689 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 18:59:35.641700 systemd[1]: Stopped verity-setup.service.
Jun 20 18:59:35.641710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:35.641750 systemd-journald[1144]: Collecting audit messages is disabled.
Jun 20 18:59:35.641776 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 18:59:35.641787 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 18:59:35.641798 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 18:59:35.641810 systemd-journald[1144]: Journal started
Jun 20 18:59:35.641831 systemd-journald[1144]: Runtime Journal (/run/log/journal/e64c85f248094c35a8a060997f7a627b) is 4.8M, max 38.3M, 33.5M free.
Jun 20 18:59:35.314086 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 18:59:35.331252 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 20 18:59:35.331809 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 18:59:35.643676 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:59:35.645332 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 18:59:35.645987 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 18:59:35.646601 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 18:59:35.647239 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 18:59:35.648925 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:59:35.649811 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 18:59:35.649945 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 18:59:35.650696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:59:35.650827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:59:35.651626 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 18:59:35.651750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 18:59:35.652431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:59:35.652586 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:59:35.653310 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 18:59:35.653435 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 18:59:35.654243 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:59:35.654360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:59:35.655094 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:59:35.655975 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 18:59:35.656725 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 18:59:35.666437 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 18:59:35.668230 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 18:59:35.676688 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 18:59:35.681773 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 18:59:35.682351 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 18:59:35.682388 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:59:35.685008 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 18:59:35.688617 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 18:59:35.690740 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 18:59:35.691289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:59:35.694795 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 18:59:35.697150 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 18:59:35.697683 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:59:35.700144 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 18:59:35.702101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:59:35.704156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:59:35.707604 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 18:59:35.708876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:59:35.713725 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 18:59:35.715054 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 18:59:35.719564 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 18:59:35.761911 systemd-journald[1144]: Time spent on flushing to /var/log/journal/e64c85f248094c35a8a060997f7a627b is 54.334ms for 1149 entries.
Jun 20 18:59:35.761911 systemd-journald[1144]: System Journal (/var/log/journal/e64c85f248094c35a8a060997f7a627b) is 8M, max 584.8M, 576.8M free.
Jun 20 18:59:35.847276 systemd-journald[1144]: Received client request to flush runtime journal.
Jun 20 18:59:35.847315 kernel: loop0: detected capacity change from 0 to 8
Jun 20 18:59:35.847331 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 18:59:35.847347 kernel: loop1: detected capacity change from 0 to 147912
Jun 20 18:59:35.769483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:59:35.771928 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 18:59:35.773424 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 18:59:35.787390 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 18:59:35.790867 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 20 18:59:35.795779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:59:35.810456 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jun 20 18:59:35.810468 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jun 20 18:59:35.828869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:59:35.837102 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 18:59:35.844560 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jun 20 18:59:35.849035 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 18:59:35.870000 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 18:59:35.888725 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 18:59:35.896641 kernel: loop2: detected capacity change from 0 to 138176
Jun 20 18:59:35.896457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:59:35.912647 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Jun 20 18:59:35.912664 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Jun 20 18:59:35.920295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:59:35.956561 kernel: loop3: detected capacity change from 0 to 224512
Jun 20 18:59:36.003596 kernel: loop4: detected capacity change from 0 to 8
Jun 20 18:59:36.006553 kernel: loop5: detected capacity change from 0 to 147912
Jun 20 18:59:36.029561 kernel: loop6: detected capacity change from 0 to 138176
Jun 20 18:59:36.060571 kernel: loop7: detected capacity change from 0 to 224512
Jun 20 18:59:36.087601 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jun 20 18:59:36.088051 (sd-merge)[1208]: Merged extensions into '/usr'.
Jun 20 18:59:36.092615 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 18:59:36.092711 systemd[1]: Reloading...
Jun 20 18:59:36.182555 zram_generator::config[1236]: No configuration found.
Jun 20 18:59:36.296670 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 18:59:36.305498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:59:36.373494 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 18:59:36.374003 systemd[1]: Reloading finished in 280 ms.
Jun 20 18:59:36.388668 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 18:59:36.392412 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 18:59:36.404670 systemd[1]: Starting ensure-sysext.service...
Jun 20 18:59:36.408659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:59:36.424060 systemd[1]: Reload requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
Jun 20 18:59:36.424074 systemd[1]: Reloading...
Jun 20 18:59:36.447266 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 18:59:36.447488 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 18:59:36.448138 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 18:59:36.448361 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jun 20 18:59:36.448405 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jun 20 18:59:36.452219 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:59:36.452233 systemd-tmpfiles[1280]: Skipping /boot
Jun 20 18:59:36.461934 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:59:36.461944 systemd-tmpfiles[1280]: Skipping /boot
Jun 20 18:59:36.498573 zram_generator::config[1312]: No configuration found.
Jun 20 18:59:36.596088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:59:36.664357 systemd[1]: Reloading finished in 239 ms.
Jun 20 18:59:36.673509 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 18:59:36.686960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:59:36.705067 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:59:36.712691 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 18:59:36.717898 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 18:59:36.726953 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:59:36.735016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:59:36.741320 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 18:59:36.748594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:36.748839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:59:36.756852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:59:36.767824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:59:36.779628 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:59:36.780743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:59:36.780922 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:59:36.789391 systemd-udevd[1364]: Using default interface naming scheme 'v255'.
Jun 20 18:59:36.791238 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 18:59:36.792593 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:36.794861 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 18:59:36.796102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:59:36.800694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:59:36.801824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:59:36.801963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:59:36.802904 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:59:36.803037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:59:36.815391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:36.815785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:59:36.822121 augenrules[1388]: No rules
Jun 20 18:59:36.823368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:59:36.827760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:59:36.836438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:59:36.837182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:59:36.837334 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:59:36.840714 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 18:59:36.841661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:36.843478 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:59:36.845715 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:59:36.846691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:59:36.846827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:59:36.847978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:59:36.848577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:59:36.852063 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:59:36.855861 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 18:59:36.857659 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:59:36.857795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:59:36.863063 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 18:59:36.874245 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 18:59:36.881064 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 18:59:36.886212 systemd[1]: Finished ensure-sysext.service.
Jun 20 18:59:36.889652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:36.897148 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:59:36.897858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:59:36.899397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:59:36.902653 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 18:59:36.905681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:59:36.908674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:59:36.909264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:59:36.909305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:59:36.913467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:59:36.925715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 20 18:59:36.927767 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 18:59:36.927809 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:36.929423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:59:36.929682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:59:36.930412 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:59:36.930601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:59:36.940148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:59:36.959072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:59:36.959237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:59:36.960009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:59:36.960273 augenrules[1423]: /sbin/augenrules: No change
Jun 20 18:59:36.968929 augenrules[1452]: No rules
Jun 20 18:59:36.968755 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:59:36.968929 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:59:36.971952 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 18:59:36.972119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 18:59:36.976421 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 18:59:37.061483 systemd-networkd[1429]: lo: Link UP
Jun 20 18:59:37.061492 systemd-networkd[1429]: lo: Gained carrier
Jun 20 18:59:37.062261 systemd-networkd[1429]: Enumeration completed
Jun 20 18:59:37.062356 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:59:37.073485 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 18:59:37.070836 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 18:59:37.079669 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 18:59:37.098574 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jun 20 18:59:37.103876 kernel: ACPI: button: Power Button [PWRF]
Jun 20 18:59:37.103519 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 20 18:59:37.104624 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 18:59:37.108176 systemd-resolved[1363]: Positive Trust Anchors:
Jun 20 18:59:37.110606 systemd-resolved[1363]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:59:37.110695 systemd-resolved[1363]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:59:37.111716 systemd-networkd[1429]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:59:37.111731 systemd-networkd[1429]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:59:37.113719 systemd-networkd[1429]: eth1: Link UP
Jun 20 18:59:37.113726 systemd-networkd[1429]: eth1: Gained carrier
Jun 20 18:59:37.113737 systemd-networkd[1429]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:59:37.116562 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1416)
Jun 20 18:59:37.120257 systemd-resolved[1363]: Using system hostname 'ci-4230-2-0-4-ec216ba796'.
Jun 20 18:59:37.122805 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 18:59:37.126384 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:59:37.127638 systemd[1]: Reached target network.target - Network.
Jun 20 18:59:37.128353 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:59:37.130515 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:59:37.130638 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:59:37.132359 systemd-networkd[1429]: eth0: Link UP
Jun 20 18:59:37.132416 systemd-networkd[1429]: eth0: Gained carrier
Jun 20 18:59:37.132473 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:59:37.159672 systemd-networkd[1429]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 20 18:59:37.160558 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Jun 20 18:59:37.175822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jun 20 18:59:37.180673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 18:59:37.193589 systemd-networkd[1429]: eth0: DHCPv4 address 157.180.74.176/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jun 20 18:59:37.194611 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jun 20 18:59:37.194647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:37.194731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:59:37.196892 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Jun 20 18:59:37.200661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:59:37.202858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:59:37.203629 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jun 20 18:59:37.204551 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jun 20 18:59:37.211561 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Jun 20 18:59:37.224560 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jun 20 18:59:37.224833 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jun 20 18:59:37.224980 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jun 20 18:59:37.228832 kernel: Console: switching to colour dummy device 80x25
Jun 20 18:59:37.243555 kernel: EDAC MC: Ver: 3.0.0
Jun 20 18:59:37.252724 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jun 20 18:59:37.252823 kernel: [drm] features: -context_init
Jun 20 18:59:37.258133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:59:37.258299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:59:37.258338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:59:37.258367 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 18:59:37.258389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:59:37.258953 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 18:59:37.260493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:59:37.261068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:59:37.261418 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:59:37.262162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:59:37.262407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:59:37.262586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:59:37.268567 kernel: [drm] number of scanouts: 1
Jun 20 18:59:37.271558 kernel: [drm] number of cap sets: 0
Jun 20 18:59:37.274778 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jun 20 18:59:37.283179 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:59:37.283338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:59:37.291798 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 20 18:59:37.291951 kernel: Console: switching to colour frame buffer device 160x50 Jun 20 18:59:37.294561 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 20 18:59:37.296038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:59:37.305796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:59:37.305992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:59:37.314671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:59:37.376269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:59:37.440234 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:59:37.448828 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 18:59:37.464052 lvm[1505]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:59:37.504195 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 18:59:37.505296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:59:37.505477 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:59:37.505981 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:59:37.507230 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:59:37.507735 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:59:37.507982 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:59:37.508108 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jun 20 18:59:37.508207 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:59:37.508261 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:59:37.508356 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:59:37.511670 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:59:37.515274 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:59:37.522053 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:59:37.522682 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:59:37.522812 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:59:37.540761 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:59:37.542691 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:59:37.555791 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 18:59:37.558971 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:59:37.563566 lvm[1509]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:59:37.562728 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:59:37.563346 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:59:37.564515 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:59:37.566195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:59:37.572692 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:59:37.579991 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jun 20 18:59:37.585722 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:59:37.598768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:59:37.606754 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:59:37.607656 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:59:37.612853 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:59:37.615734 jq[1513]: false Jun 20 18:59:37.626729 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:59:37.637859 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jun 20 18:59:37.650799 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:59:37.656741 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jun 20 18:59:37.676929 extend-filesystems[1516]: Found loop4 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found loop5 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found loop6 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found loop7 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda1 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda2 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda3 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found usr Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda4 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda6 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda7 Jun 20 18:59:37.676929 extend-filesystems[1516]: Found sda9 Jun 20 18:59:37.676929 extend-filesystems[1516]: Checking size of /dev/sda9 Jun 20 18:59:37.740770 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jun 20 18:59:37.672699 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:59:37.741060 extend-filesystems[1516]: Resized partition /dev/sda9 Jun 20 18:59:37.743784 coreos-metadata[1511]: Jun 20 18:59:37.685 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jun 20 18:59:37.743784 coreos-metadata[1511]: Jun 20 18:59:37.689 INFO Fetch successful Jun 20 18:59:37.743784 coreos-metadata[1511]: Jun 20 18:59:37.689 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jun 20 18:59:37.743784 coreos-metadata[1511]: Jun 20 18:59:37.693 INFO Fetch successful Jun 20 18:59:37.691965 dbus-daemon[1512]: [system] SELinux support is enabled Jun 20 18:59:37.674127 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jun 20 18:59:37.750809 extend-filesystems[1535]: resize2fs 1.47.1 (20-May-2024) Jun 20 18:59:37.675224 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:59:37.685488 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:59:37.704677 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:59:37.713788 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:59:37.761402 jq[1537]: true Jun 20 18:59:37.723714 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 18:59:37.737235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:59:37.737409 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:59:37.737681 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:59:37.737827 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:59:37.748954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:59:37.749625 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 20 18:59:37.784516 update_engine[1532]: I20250620 18:59:37.783147 1532 main.cc:92] Flatcar Update Engine starting Jun 20 18:59:37.789970 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1421) Jun 20 18:59:37.790049 update_engine[1532]: I20250620 18:59:37.784562 1532 update_check_scheduler.cc:74] Next update check in 10m44s Jun 20 18:59:37.801137 tar[1544]: linux-amd64/LICENSE Jun 20 18:59:37.801137 tar[1544]: linux-amd64/helm Jun 20 18:59:37.800944 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:59:37.807434 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:59:37.812198 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:59:37.812230 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:59:37.812736 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:59:37.812752 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:59:37.816277 jq[1546]: true Jun 20 18:59:37.820881 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:59:37.854160 systemd-logind[1529]: New seat seat0. Jun 20 18:59:37.857751 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 18:59:37.857771 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 18:59:37.860005 systemd[1]: Started systemd-logind.service - User Login Management. 
Jun 20 18:59:37.883638 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:59:37.885158 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:59:37.975609 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:59:37.976794 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:59:37.998819 systemd[1]: Starting sshkeys.service... Jun 20 18:59:38.054224 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 18:59:38.065188 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jun 20 18:59:38.070358 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 18:59:38.082161 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:59:38.103569 coreos-metadata[1592]: Jun 20 18:59:38.090 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jun 20 18:59:38.103569 coreos-metadata[1592]: Jun 20 18:59:38.094 INFO Fetch successful Jun 20 18:59:38.106126 unknown[1592]: wrote ssh authorized keys file for user: core Jun 20 18:59:38.107738 extend-filesystems[1535]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 20 18:59:38.107738 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 5 Jun 20 18:59:38.107738 extend-filesystems[1535]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jun 20 18:59:38.119770 extend-filesystems[1516]: Resized filesystem in /dev/sda9 Jun 20 18:59:38.119770 extend-filesystems[1516]: Found sr0 Jun 20 18:59:38.108843 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:59:38.109037 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jun 20 18:59:38.146198 update-ssh-keys[1598]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:59:38.146905 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 18:59:38.154625 systemd[1]: Finished sshkeys.service. Jun 20 18:59:38.162773 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:59:38.192878 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:59:38.205057 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:59:38.216539 containerd[1547]: time="2025-06-20T18:59:38.215255382Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 18:59:38.219183 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:59:38.219382 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:59:38.231819 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:59:38.250392 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:59:38.263979 containerd[1547]: time="2025-06-20T18:59:38.262040365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:59:38.263031 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:59:38.269373 containerd[1547]: time="2025-06-20T18:59:38.269317618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:59:38.269373 containerd[1547]: time="2025-06-20T18:59:38.269360819Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jun 20 18:59:38.269373 containerd[1547]: time="2025-06-20T18:59:38.269379474Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269602763Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269627359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269680789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269691018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269869784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269883189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269895752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269903687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.269962498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.270121145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:59:38.270829 containerd[1547]: time="2025-06-20T18:59:38.270241501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:59:38.271039 containerd[1547]: time="2025-06-20T18:59:38.270252771Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 18:59:38.271039 containerd[1547]: time="2025-06-20T18:59:38.270325819Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 18:59:38.271039 containerd[1547]: time="2025-06-20T18:59:38.270360844Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:59:38.273790 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:59:38.275191 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:59:38.282107 containerd[1547]: time="2025-06-20T18:59:38.282055359Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 18:59:38.282546 containerd[1547]: time="2025-06-20T18:59:38.282256466Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 18:59:38.282546 containerd[1547]: time="2025-06-20T18:59:38.282279510Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jun 20 18:59:38.282546 containerd[1547]: time="2025-06-20T18:59:38.282295720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 18:59:38.282546 containerd[1547]: time="2025-06-20T18:59:38.282310047Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 18:59:38.282546 containerd[1547]: time="2025-06-20T18:59:38.282486397Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 18:59:38.282869 containerd[1547]: time="2025-06-20T18:59:38.282855299Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 18:59:38.282994 containerd[1547]: time="2025-06-20T18:59:38.282981606Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 18:59:38.283044 containerd[1547]: time="2025-06-20T18:59:38.283032611Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 18:59:38.283089 containerd[1547]: time="2025-06-20T18:59:38.283080692Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 18:59:38.283129 containerd[1547]: time="2025-06-20T18:59:38.283121588Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:59:38.283178 containerd[1547]: time="2025-06-20T18:59:38.283169678Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:59:38.283218 containerd[1547]: time="2025-06-20T18:59:38.283210335Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jun 20 18:59:38.283257 containerd[1547]: time="2025-06-20T18:59:38.283249478Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:59:38.283298 containerd[1547]: time="2025-06-20T18:59:38.283289784Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283328446Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283342452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283352952Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283371797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283384060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283395151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283419356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283430467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283442139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283473498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283485350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283497142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283512461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283738 containerd[1547]: time="2025-06-20T18:59:38.283537077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283977 containerd[1547]: time="2025-06-20T18:59:38.283547857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283977 containerd[1547]: time="2025-06-20T18:59:38.283567645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283977 containerd[1547]: time="2025-06-20T18:59:38.283580959Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:59:38.283977 containerd[1547]: time="2025-06-20T18:59:38.283600025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.283977 containerd[1547]: time="2025-06-20T18:59:38.283611817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jun 20 18:59:38.283977 containerd[1547]: time="2025-06-20T18:59:38.283622027Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:59:38.284610 containerd[1547]: time="2025-06-20T18:59:38.284555196Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 18:59:38.284610 containerd[1547]: time="2025-06-20T18:59:38.284584681Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:59:38.286035 containerd[1547]: time="2025-06-20T18:59:38.284726026Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:59:38.286035 containerd[1547]: time="2025-06-20T18:59:38.284748538Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:59:38.286035 containerd[1547]: time="2025-06-20T18:59:38.284759098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:59:38.286035 containerd[1547]: time="2025-06-20T18:59:38.284773425Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 18:59:38.286035 containerd[1547]: time="2025-06-20T18:59:38.284784245Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:59:38.286035 containerd[1547]: time="2025-06-20T18:59:38.284795937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 20 18:59:38.286183 containerd[1547]: time="2025-06-20T18:59:38.285072616Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:59:38.286183 containerd[1547]: time="2025-06-20T18:59:38.285114855Z" level=info msg="Connect containerd service" Jun 20 18:59:38.286183 containerd[1547]: time="2025-06-20T18:59:38.285137838Z" level=info msg="using legacy CRI server" Jun 20 18:59:38.286183 containerd[1547]: time="2025-06-20T18:59:38.285143148Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:59:38.286183 containerd[1547]: time="2025-06-20T18:59:38.285242845Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:59:38.286183 containerd[1547]: time="2025-06-20T18:59:38.285843030Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:59:38.286551 containerd[1547]: time="2025-06-20T18:59:38.286506584Z" level=info msg="Start subscribing containerd event" Jun 20 18:59:38.286619 containerd[1547]: time="2025-06-20T18:59:38.286608946Z" level=info msg="Start recovering state" Jun 20 18:59:38.286708 containerd[1547]: time="2025-06-20T18:59:38.286696560Z" level=info msg="Start event monitor" Jun 20 18:59:38.286750 containerd[1547]: time="2025-06-20T18:59:38.286742777Z" level=info msg="Start 
snapshots syncer" Jun 20 18:59:38.286793 containerd[1547]: time="2025-06-20T18:59:38.286784726Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:59:38.286830 containerd[1547]: time="2025-06-20T18:59:38.286822476Z" level=info msg="Start streaming server" Jun 20 18:59:38.287093 containerd[1547]: time="2025-06-20T18:59:38.287080831Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:59:38.287234 containerd[1547]: time="2025-06-20T18:59:38.287222767Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:59:38.287477 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:59:38.288550 containerd[1547]: time="2025-06-20T18:59:38.287360345Z" level=info msg="containerd successfully booted in 0.073562s" Jun 20 18:59:38.527130 tar[1544]: linux-amd64/README.md Jun 20 18:59:38.543124 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:59:38.650294 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:59:38.660308 systemd[1]: Started sshd@0-157.180.74.176:22-139.178.68.195:35960.service - OpenSSH per-connection server daemon (139.178.68.195:35960). Jun 20 18:59:38.769976 systemd-networkd[1429]: eth0: Gained IPv6LL Jun 20 18:59:38.771075 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Jun 20 18:59:38.775018 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:59:38.779952 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:59:38.789869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:59:38.805498 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:59:38.833823 systemd-networkd[1429]: eth1: Gained IPv6LL Jun 20 18:59:38.834446 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. 
Jun 20 18:59:38.847284 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:59:39.666596 sshd[1628]: Accepted publickey for core from 139.178.68.195 port 35960 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:59:39.668902 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:59:39.682119 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:59:39.693300 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:59:39.712667 systemd-logind[1529]: New session 1 of user core. Jun 20 18:59:39.722348 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:59:39.731971 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:59:39.739304 (systemd)[1644]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:59:39.742377 systemd-logind[1529]: New session c1 of user core. Jun 20 18:59:39.894860 systemd[1644]: Queued start job for default target default.target. Jun 20 18:59:39.905740 systemd[1644]: Created slice app.slice - User Application Slice. Jun 20 18:59:39.905765 systemd[1644]: Reached target paths.target - Paths. Jun 20 18:59:39.905946 systemd[1644]: Reached target timers.target - Timers. Jun 20 18:59:39.907220 systemd[1644]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:59:39.924021 systemd[1644]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:59:39.924127 systemd[1644]: Reached target sockets.target - Sockets. Jun 20 18:59:39.924164 systemd[1644]: Reached target basic.target - Basic System. Jun 20 18:59:39.924196 systemd[1644]: Reached target default.target - Main User Target. Jun 20 18:59:39.924218 systemd[1644]: Startup finished in 173ms. Jun 20 18:59:39.924610 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jun 20 18:59:39.930725 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 18:59:40.415825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:59:40.418734 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:59:40.420412 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 18:59:40.424584 systemd[1]: Startup finished in 1.482s (kernel) + 1min 50.028s (initrd) + 5.722s (userspace) = 1min 57.233s.
Jun 20 18:59:40.627975 systemd[1]: Started sshd@1-157.180.74.176:22-139.178.68.195:35972.service - OpenSSH per-connection server daemon (139.178.68.195:35972).
Jun 20 18:59:41.293269 kubelet[1659]: E0620 18:59:41.293156 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:59:41.295673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:59:41.295935 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:59:41.296695 systemd[1]: kubelet.service: Consumed 1.706s CPU time, 269.6M memory peak.
Jun 20 18:59:41.603887 sshd[1669]: Accepted publickey for core from 139.178.68.195 port 35972 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 18:59:41.606930 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:59:41.615080 systemd-logind[1529]: New session 2 of user core.
Jun 20 18:59:41.625761 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 18:59:42.280896 sshd[1673]: Connection closed by 139.178.68.195 port 35972
Jun 20 18:59:42.281734 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Jun 20 18:59:42.285978 systemd[1]: sshd@1-157.180.74.176:22-139.178.68.195:35972.service: Deactivated successfully.
Jun 20 18:59:42.288908 systemd[1]: session-2.scope: Deactivated successfully.
Jun 20 18:59:42.291326 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit.
Jun 20 18:59:42.293030 systemd-logind[1529]: Removed session 2.
Jun 20 18:59:42.461024 systemd[1]: Started sshd@2-157.180.74.176:22-139.178.68.195:35980.service - OpenSSH per-connection server daemon (139.178.68.195:35980).
Jun 20 18:59:43.450037 sshd[1679]: Accepted publickey for core from 139.178.68.195 port 35980 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 18:59:43.452231 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:59:43.459580 systemd-logind[1529]: New session 3 of user core.
Jun 20 18:59:43.469787 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 18:59:44.121816 sshd[1681]: Connection closed by 139.178.68.195 port 35980
Jun 20 18:59:44.122985 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Jun 20 18:59:44.128765 systemd[1]: sshd@2-157.180.74.176:22-139.178.68.195:35980.service: Deactivated successfully.
Jun 20 18:59:44.131788 systemd[1]: session-3.scope: Deactivated successfully.
Jun 20 18:59:44.133027 systemd-logind[1529]: Session 3 logged out. Waiting for processes to exit.
Jun 20 18:59:44.134904 systemd-logind[1529]: Removed session 3.
Jun 20 18:59:44.298000 systemd[1]: Started sshd@3-157.180.74.176:22-139.178.68.195:54666.service - OpenSSH per-connection server daemon (139.178.68.195:54666).
Jun 20 18:59:45.293222 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 54666 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 18:59:45.295335 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:59:45.302519 systemd-logind[1529]: New session 4 of user core.
Jun 20 18:59:45.309743 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 18:59:45.971902 sshd[1689]: Connection closed by 139.178.68.195 port 54666
Jun 20 18:59:45.972784 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
Jun 20 18:59:45.977774 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit.
Jun 20 18:59:45.978874 systemd[1]: sshd@3-157.180.74.176:22-139.178.68.195:54666.service: Deactivated successfully.
Jun 20 18:59:45.982061 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 18:59:45.983742 systemd-logind[1529]: Removed session 4.
Jun 20 18:59:46.152813 systemd[1]: Started sshd@4-157.180.74.176:22-139.178.68.195:54680.service - OpenSSH per-connection server daemon (139.178.68.195:54680).
Jun 20 18:59:47.131856 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 54680 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 18:59:47.134438 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:59:47.139829 systemd-logind[1529]: New session 5 of user core.
Jun 20 18:59:47.145733 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 18:59:47.663180 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 18:59:47.663662 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:59:47.682189 sudo[1698]: pam_unix(sudo:session): session closed for user root
Jun 20 18:59:47.839805 sshd[1697]: Connection closed by 139.178.68.195 port 54680
Jun 20 18:59:47.840898 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
Jun 20 18:59:47.845039 systemd[1]: sshd@4-157.180.74.176:22-139.178.68.195:54680.service: Deactivated successfully.
Jun 20 18:59:47.847751 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 18:59:47.849776 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit.
Jun 20 18:59:47.851404 systemd-logind[1529]: Removed session 5.
Jun 20 18:59:48.021016 systemd[1]: Started sshd@5-157.180.74.176:22-139.178.68.195:54684.service - OpenSSH per-connection server daemon (139.178.68.195:54684).
Jun 20 18:59:49.009624 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 54684 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 18:59:49.011983 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:59:49.022677 systemd-logind[1529]: New session 6 of user core.
Jun 20 18:59:49.033857 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 18:59:49.533553 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 18:59:49.534042 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:59:49.539904 sudo[1708]: pam_unix(sudo:session): session closed for user root
Jun 20 18:59:49.549410 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 18:59:49.549905 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:59:49.573149 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:59:49.623080 augenrules[1730]: No rules
Jun 20 18:59:49.625347 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:59:49.625867 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:59:49.628408 sudo[1707]: pam_unix(sudo:session): session closed for user root
Jun 20 18:59:49.787389 sshd[1706]: Connection closed by 139.178.68.195 port 54684
Jun 20 18:59:49.788608 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Jun 20 18:59:49.793408 systemd[1]: sshd@5-157.180.74.176:22-139.178.68.195:54684.service: Deactivated successfully.
Jun 20 18:59:49.796106 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 18:59:49.797179 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit.
Jun 20 18:59:49.799037 systemd-logind[1529]: Removed session 6.
Jun 20 18:59:49.965063 systemd[1]: Started sshd@6-157.180.74.176:22-139.178.68.195:54688.service - OpenSSH per-connection server daemon (139.178.68.195:54688).
Jun 20 18:59:50.940155 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 54688 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 18:59:50.942143 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:59:50.950359 systemd-logind[1529]: New session 7 of user core.
Jun 20 18:59:50.957770 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 18:59:51.412409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 18:59:51.419095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:59:51.463153 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 18:59:51.463665 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:59:51.561441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:59:51.566600 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:59:51.608674 kubelet[1757]: E0620 18:59:51.608627 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:59:51.614958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:59:51.615121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:59:51.615557 systemd[1]: kubelet.service: Consumed 166ms CPU time, 110.3M memory peak.
Jun 20 18:59:51.900043 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 18:59:51.913216 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 18:59:52.336473 dockerd[1774]: time="2025-06-20T18:59:52.336098849Z" level=info msg="Starting up"
Jun 20 18:59:52.494881 dockerd[1774]: time="2025-06-20T18:59:52.494592713Z" level=info msg="Loading containers: start."
Jun 20 18:59:52.701870 kernel: Initializing XFRM netlink socket
Jun 20 18:59:52.747219 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Jun 20 18:59:52.788694 systemd-timesyncd[1430]: Contacted time server 176.9.42.91:123 (2.flatcar.pool.ntp.org).
Jun 20 18:59:52.788772 systemd-timesyncd[1430]: Initial clock synchronization to Fri 2025-06-20 18:59:52.943244 UTC.
Jun 20 18:59:52.829075 systemd-networkd[1429]: docker0: Link UP
Jun 20 18:59:52.867097 dockerd[1774]: time="2025-06-20T18:59:52.867014179Z" level=info msg="Loading containers: done."
Jun 20 18:59:52.900562 dockerd[1774]: time="2025-06-20T18:59:52.900384111Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 18:59:52.900827 dockerd[1774]: time="2025-06-20T18:59:52.900666630Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jun 20 18:59:52.900925 dockerd[1774]: time="2025-06-20T18:59:52.900875081Z" level=info msg="Daemon has completed initialization"
Jun 20 18:59:52.969417 dockerd[1774]: time="2025-06-20T18:59:52.968847496Z" level=info msg="API listen on /run/docker.sock"
Jun 20 18:59:52.969143 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 18:59:54.500866 containerd[1547]: time="2025-06-20T18:59:54.500791712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jun 20 18:59:55.162302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601626085.mount: Deactivated successfully.
Jun 20 18:59:56.081995 containerd[1547]: time="2025-06-20T18:59:56.081925672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:56.082909 containerd[1547]: time="2025-06-20T18:59:56.082885862Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799139"
Jun 20 18:59:56.083852 containerd[1547]: time="2025-06-20T18:59:56.083799764Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:56.087642 containerd[1547]: time="2025-06-20T18:59:56.087598937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:56.088511 containerd[1547]: time="2025-06-20T18:59:56.088376816Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.5875174s"
Jun 20 18:59:56.088511 containerd[1547]: time="2025-06-20T18:59:56.088403943Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jun 20 18:59:56.089523 containerd[1547]: time="2025-06-20T18:59:56.089496392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jun 20 18:59:57.353942 containerd[1547]: time="2025-06-20T18:59:57.353871502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:57.354981 containerd[1547]: time="2025-06-20T18:59:57.354947490Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783934"
Jun 20 18:59:57.355979 containerd[1547]: time="2025-06-20T18:59:57.355946147Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:57.358293 containerd[1547]: time="2025-06-20T18:59:57.358259787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:57.359296 containerd[1547]: time="2025-06-20T18:59:57.359195450Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.269675514s"
Jun 20 18:59:57.359296 containerd[1547]: time="2025-06-20T18:59:57.359219211Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jun 20 18:59:57.359882 containerd[1547]: time="2025-06-20T18:59:57.359856573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jun 20 18:59:59.359018 containerd[1547]: time="2025-06-20T18:59:59.358939551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:59.360078 containerd[1547]: time="2025-06-20T18:59:59.360044300Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176938"
Jun 20 18:59:59.361054 containerd[1547]: time="2025-06-20T18:59:59.361016966Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:59.363610 containerd[1547]: time="2025-06-20T18:59:59.363575256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:59:59.364529 containerd[1547]: time="2025-06-20T18:59:59.364425289Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.004544032s"
Jun 20 18:59:59.364529 containerd[1547]: time="2025-06-20T18:59:59.364450411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jun 20 18:59:59.365029 containerd[1547]: time="2025-06-20T18:59:59.364914420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jun 20 19:00:00.464969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537695521.mount: Deactivated successfully.
Jun 20 19:00:00.825178 containerd[1547]: time="2025-06-20T19:00:00.825014344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:00.826372 containerd[1547]: time="2025-06-20T19:00:00.826311292Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895391"
Jun 20 19:00:00.827949 containerd[1547]: time="2025-06-20T19:00:00.827891677Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:00.830228 containerd[1547]: time="2025-06-20T19:00:00.830175316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:00.830899 containerd[1547]: time="2025-06-20T19:00:00.830717586Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.465634115s"
Jun 20 19:00:00.830899 containerd[1547]: time="2025-06-20T19:00:00.830759380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jun 20 19:00:00.831375 containerd[1547]: time="2025-06-20T19:00:00.831337709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 19:00:01.390470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627856835.mount: Deactivated successfully.
Jun 20 19:00:01.662418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:00:01.670910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:00:01.803574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:00:01.810753 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:00:01.863919 kubelet[2058]: E0620 19:00:01.863835 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:00:01.866280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:00:01.866438 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:00:01.866770 systemd[1]: kubelet.service: Consumed 156ms CPU time, 110.1M memory peak.
Jun 20 19:00:02.299188 containerd[1547]: time="2025-06-20T19:00:02.299107414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:02.300724 containerd[1547]: time="2025-06-20T19:00:02.300677093Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Jun 20 19:00:02.302175 containerd[1547]: time="2025-06-20T19:00:02.302111397Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:02.305429 containerd[1547]: time="2025-06-20T19:00:02.305057718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:02.306091 containerd[1547]: time="2025-06-20T19:00:02.306066053Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.474700536s"
Jun 20 19:00:02.306132 containerd[1547]: time="2025-06-20T19:00:02.306102009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jun 20 19:00:02.306597 containerd[1547]: time="2025-06-20T19:00:02.306577587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:00:02.837342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511401856.mount: Deactivated successfully.
Jun 20 19:00:02.845694 containerd[1547]: time="2025-06-20T19:00:02.845605512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:02.847083 containerd[1547]: time="2025-06-20T19:00:02.847034203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Jun 20 19:00:02.848229 containerd[1547]: time="2025-06-20T19:00:02.848160157Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:02.859592 containerd[1547]: time="2025-06-20T19:00:02.859450306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:02.860583 containerd[1547]: time="2025-06-20T19:00:02.860301377Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 553.692269ms"
Jun 20 19:00:02.860583 containerd[1547]: time="2025-06-20T19:00:02.860341836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 19:00:02.862202 containerd[1547]: time="2025-06-20T19:00:02.861872366Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jun 20 19:00:03.441660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641772182.mount: Deactivated successfully.
Jun 20 19:00:06.123085 containerd[1547]: time="2025-06-20T19:00:06.122995850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:06.124853 containerd[1547]: time="2025-06-20T19:00:06.124804106Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430"
Jun 20 19:00:06.126877 containerd[1547]: time="2025-06-20T19:00:06.126836072Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:06.131201 containerd[1547]: time="2025-06-20T19:00:06.130077510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:06.131201 containerd[1547]: time="2025-06-20T19:00:06.131067114Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.269162633s"
Jun 20 19:00:06.131201 containerd[1547]: time="2025-06-20T19:00:06.131094297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jun 20 19:00:09.105500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:00:09.105782 systemd[1]: kubelet.service: Consumed 156ms CPU time, 110.1M memory peak.
Jun 20 19:00:09.118280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:00:09.166280 systemd[1]: Reload requested from client PID 2182 ('systemctl') (unit session-7.scope)...
Jun 20 19:00:09.166312 systemd[1]: Reloading...
Jun 20 19:00:09.275560 zram_generator::config[2227]: No configuration found.
Jun 20 19:00:09.381889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:00:09.505544 systemd[1]: Reloading finished in 338 ms.
Jun 20 19:00:09.558461 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 19:00:09.558568 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 19:00:09.558817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:00:09.558891 systemd[1]: kubelet.service: Consumed 87ms CPU time, 97.4M memory peak.
Jun 20 19:00:09.562236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:00:09.681907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:00:09.687504 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:00:09.749159 kubelet[2282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:00:09.749159 kubelet[2282]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:00:09.749159 kubelet[2282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:00:09.749623 kubelet[2282]: I0620 19:00:09.749226 2282 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:00:09.919869 kubelet[2282]: I0620 19:00:09.919805 2282 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jun 20 19:00:09.919869 kubelet[2282]: I0620 19:00:09.919836 2282 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:00:09.920140 kubelet[2282]: I0620 19:00:09.920105 2282 server.go:954] "Client rotation is on, will bootstrap in background"
Jun 20 19:00:09.953132 kubelet[2282]: I0620 19:00:09.952983 2282 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:00:09.965584 kubelet[2282]: E0620 19:00:09.965450 2282 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://157.180.74.176:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:00:09.977451 kubelet[2282]: E0620 19:00:09.977391 2282 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jun 20 19:00:09.977451 kubelet[2282]: I0620 19:00:09.977439 2282 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jun 20 19:00:09.984209 kubelet[2282]: I0620 19:00:09.984173 2282 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:00:09.990875 kubelet[2282]: I0620 19:00:09.990783 2282 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:00:09.991172 kubelet[2282]: I0620 19:00:09.990869 2282 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-4-ec216ba796","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:00:09.994330 kubelet[2282]: I0620 19:00:09.994289 2282 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:00:09.994330 kubelet[2282]: I0620 19:00:09.994322 2282 container_manager_linux.go:304] "Creating device plugin manager"
Jun 20 19:00:09.996654 kubelet[2282]: I0620 19:00:09.996613 2282 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:00:10.002857 kubelet[2282]: I0620 19:00:10.002829 2282 kubelet.go:446] "Attempting to sync node with API server"
Jun 20 19:00:10.002939 kubelet[2282]: I0620 19:00:10.002889 2282 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:00:10.002939 kubelet[2282]: I0620 19:00:10.002921 2282 kubelet.go:352] "Adding apiserver pod source"
Jun 20 19:00:10.002978 kubelet[2282]: I0620 19:00:10.002939 2282 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:00:10.012637 kubelet[2282]: W0620 19:00:10.012129 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.74.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused
Jun 20 19:00:10.012637 kubelet[2282]: E0620 19:00:10.012212 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.74.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:00:10.012637 kubelet[2282]: W0620 19:00:10.012569 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.74.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-4-ec216ba796&limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused
Jun 20 19:00:10.012637 kubelet[2282]: E0620 19:00:10.012597 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.74.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-4-ec216ba796&limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:00:10.014874 kubelet[2282]: I0620 19:00:10.014752 2282 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jun 20 19:00:10.018103 kubelet[2282]: I0620 19:00:10.018077 2282 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 19:00:10.018198 kubelet[2282]: W0620 19:00:10.018128 2282 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 19:00:10.018733 kubelet[2282]: I0620 19:00:10.018658 2282 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:00:10.018733 kubelet[2282]: I0620 19:00:10.018690 2282 server.go:1287] "Started kubelet"
Jun 20 19:00:10.020331 kubelet[2282]: I0620 19:00:10.020122 2282 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:00:10.025508 kubelet[2282]: I0620 19:00:10.025085 2282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:00:10.025508 kubelet[2282]: I0620 19:00:10.025430 2282 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:00:10.026953 kubelet[2282]: I0620 19:00:10.026255 2282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:00:10.028643 kubelet[2282]: E0620 19:00:10.026817 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.74.176:6443/api/v1/namespaces/default/events\": dial tcp 157.180.74.176:6443: connect: connection refused"
event="&Event{ObjectMeta:{ci-4230-2-0-4-ec216ba796.184ad55f31a9a8ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-4-ec216ba796,UID:ci-4230-2-0-4-ec216ba796,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-4-ec216ba796,},FirstTimestamp:2025-06-20 19:00:10.018670829 +0000 UTC m=+0.325291500,LastTimestamp:2025-06-20 19:00:10.018670829 +0000 UTC m=+0.325291500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-4-ec216ba796,}" Jun 20 19:00:10.031633 kubelet[2282]: I0620 19:00:10.031616 2282 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:00:10.035079 kubelet[2282]: I0620 19:00:10.034505 2282 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:00:10.038809 kubelet[2282]: I0620 19:00:10.038077 2282 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:00:10.038809 kubelet[2282]: E0620 19:00:10.038365 2282 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-4-ec216ba796\" not found" Jun 20 19:00:10.041662 kubelet[2282]: E0620 19:00:10.041627 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.74.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-4-ec216ba796?timeout=10s\": dial tcp 157.180.74.176:6443: connect: connection refused" interval="200ms" Jun 20 19:00:10.042045 kubelet[2282]: I0620 19:00:10.042025 2282 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:00:10.042250 kubelet[2282]: I0620 19:00:10.042233 2282 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:00:10.042624 kubelet[2282]: I0620 19:00:10.042050 2282 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:00:10.042702 kubelet[2282]: I0620 19:00:10.042140 2282 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:00:10.045437 kubelet[2282]: W0620 19:00:10.045390 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.74.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused Jun 20 19:00:10.045522 kubelet[2282]: E0620 19:00:10.045449 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.74.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:00:10.046905 kubelet[2282]: I0620 19:00:10.045592 2282 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:00:10.053097 kubelet[2282]: E0620 19:00:10.053063 2282 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:00:10.059255 kubelet[2282]: I0620 19:00:10.059196 2282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:00:10.061608 kubelet[2282]: I0620 19:00:10.060572 2282 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:00:10.061608 kubelet[2282]: I0620 19:00:10.060590 2282 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:00:10.061608 kubelet[2282]: I0620 19:00:10.060613 2282 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:00:10.061608 kubelet[2282]: I0620 19:00:10.060622 2282 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:00:10.061608 kubelet[2282]: E0620 19:00:10.060672 2282 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:00:10.072488 kubelet[2282]: W0620 19:00:10.072416 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.74.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused Jun 20 19:00:10.072653 kubelet[2282]: E0620 19:00:10.072496 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.74.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:00:10.078831 kubelet[2282]: I0620 19:00:10.078801 2282 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:00:10.078831 kubelet[2282]: I0620 19:00:10.078817 2282 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:00:10.078985 kubelet[2282]: I0620 19:00:10.078895 2282 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:00:10.081653 kubelet[2282]: I0620 19:00:10.081627 2282 policy_none.go:49] "None policy: Start" Jun 20 19:00:10.081653 kubelet[2282]: I0620 19:00:10.081645 2282 memory_manager.go:186] "Starting memorymanager" policy="None" 
Jun 20 19:00:10.081653 kubelet[2282]: I0620 19:00:10.081656 2282 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:00:10.087973 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 19:00:10.098098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 19:00:10.101218 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 19:00:10.111542 kubelet[2282]: I0620 19:00:10.111159 2282 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 20 19:00:10.111619 kubelet[2282]: I0620 19:00:10.111575 2282 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:00:10.111619 kubelet[2282]: I0620 19:00:10.111588 2282 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:00:10.111875 kubelet[2282]: I0620 19:00:10.111857 2282 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:00:10.113394 kubelet[2282]: E0620 19:00:10.113376 2282 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:00:10.113452 kubelet[2282]: E0620 19:00:10.113417 2282 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-0-4-ec216ba796\" not found"
Jun 20 19:00:10.178516 systemd[1]: Created slice kubepods-burstable-pod67b7f6ec6a09c20db189bb964469cf8a.slice - libcontainer container kubepods-burstable-pod67b7f6ec6a09c20db189bb964469cf8a.slice.
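The kubelet entries above all carry the standard klog header: a severity letter (I/W/E/F), an MMDD date, a wall-clock time with microseconds, the PID, and the emitting source file and line, followed by the message (e.g. `I0620 19:00:10.018690 2282 server.go:1287] "Started kubelet"`). As an aside for anyone post-processing a capture like this one, a minimal sketch of a parser for that header (the field names are my own, not part of klog):

```python
import re

# klog header, as seen in the kubelet lines above:
#   <severity><MMDD> <HH:MM:SS.micros> <pid> <file:line>] <message>
KLOG_RE = re.compile(
    r'^(?P<severity>[IWEF])(?P<date>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+) (?P<source>[\w./-]+:\d+)\] (?P<message>.*)$'
)

def parse_klog(line: str):
    """Return the parsed header fields of a klog-formatted line, or None if it doesn't match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None
```

For example, `parse_klog('I0620 19:00:10.018690 2282 server.go:1287] "Started kubelet"')` yields severity `I`, pid `2282`, and source `server.go:1287`.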
Jun 20 19:00:10.200214 kubelet[2282]: E0620 19:00:10.200179 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.204347 systemd[1]: Created slice kubepods-burstable-pod78e2a6298204b6cadcaf35bc7931a54c.slice - libcontainer container kubepods-burstable-pod78e2a6298204b6cadcaf35bc7931a54c.slice.
Jun 20 19:00:10.208247 kubelet[2282]: E0620 19:00:10.208222 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.212665 systemd[1]: Created slice kubepods-burstable-pod1ad1e74a8d5a5dab456b5d7961b02543.slice - libcontainer container kubepods-burstable-pod1ad1e74a8d5a5dab456b5d7961b02543.slice.
Jun 20 19:00:10.216201 kubelet[2282]: I0620 19:00:10.216178 2282 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.217016 kubelet[2282]: E0620 19:00:10.216990 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.74.176:6443/api/v1/nodes\": dial tcp 157.180.74.176:6443: connect: connection refused" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.217199 kubelet[2282]: E0620 19:00:10.217160 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243394 kubelet[2282]: I0620 19:00:10.243215 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243394 kubelet[2282]: I0620 19:00:10.243279 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243394 kubelet[2282]: I0620 19:00:10.243310 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243394 kubelet[2282]: I0620 19:00:10.243333 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78e2a6298204b6cadcaf35bc7931a54c-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-4-ec216ba796\" (UID: \"78e2a6298204b6cadcaf35bc7931a54c\") " pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243713 kubelet[2282]: I0620 19:00:10.243579 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ad1e74a8d5a5dab456b5d7961b02543-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" (UID: \"1ad1e74a8d5a5dab456b5d7961b02543\") " pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243752 kubelet[2282]: I0620 19:00:10.243684 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ad1e74a8d5a5dab456b5d7961b02543-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" (UID: \"1ad1e74a8d5a5dab456b5d7961b02543\") " pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243794 kubelet[2282]: I0620 19:00:10.243766 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ad1e74a8d5a5dab456b5d7961b02543-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" (UID: \"1ad1e74a8d5a5dab456b5d7961b02543\") " pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.243900 kubelet[2282]: I0620 19:00:10.243849 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.244115 kubelet[2282]: E0620 19:00:10.244059 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.74.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-4-ec216ba796?timeout=10s\": dial tcp 157.180.74.176:6443: connect: connection refused" interval="400ms"
Jun 20 19:00:10.244223 kubelet[2282]: I0620 19:00:10.244173 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.420247 kubelet[2282]: I0620 19:00:10.420183 2282 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.420988 kubelet[2282]: E0620 19:00:10.420866 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.74.176:6443/api/v1/nodes\": dial tcp 157.180.74.176:6443: connect: connection refused" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.505627 containerd[1547]: time="2025-06-20T19:00:10.505404662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-4-ec216ba796,Uid:67b7f6ec6a09c20db189bb964469cf8a,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:10.510153 containerd[1547]: time="2025-06-20T19:00:10.509678881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-4-ec216ba796,Uid:78e2a6298204b6cadcaf35bc7931a54c,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:10.518893 containerd[1547]: time="2025-06-20T19:00:10.518822529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-4-ec216ba796,Uid:1ad1e74a8d5a5dab456b5d7961b02543,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:10.645868 kubelet[2282]: E0620 19:00:10.645806 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.74.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-4-ec216ba796?timeout=10s\": dial tcp 157.180.74.176:6443: connect: connection refused" interval="800ms"
Jun 20 19:00:10.824238 kubelet[2282]: I0620 19:00:10.824082 2282 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:10.824793 kubelet[2282]: E0620 19:00:10.824614 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.74.176:6443/api/v1/nodes\": dial tcp 157.180.74.176:6443: connect: connection refused" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:11.000034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533986551.mount: Deactivated successfully.
Jun 20 19:00:11.010958 containerd[1547]: time="2025-06-20T19:00:11.010877308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:00:11.015664 containerd[1547]: time="2025-06-20T19:00:11.015586537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Jun 20 19:00:11.018048 containerd[1547]: time="2025-06-20T19:00:11.018001106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:00:11.021621 containerd[1547]: time="2025-06-20T19:00:11.021515853Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:00:11.023598 containerd[1547]: time="2025-06-20T19:00:11.023513632Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 20 19:00:11.026318 containerd[1547]: time="2025-06-20T19:00:11.026034012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 20 19:00:11.026318 containerd[1547]: time="2025-06-20T19:00:11.026175885Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:00:11.030465 containerd[1547]: time="2025-06-20T19:00:11.029794680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.145124ms"
Jun 20 19:00:11.034302 containerd[1547]: time="2025-06-20T19:00:11.034219671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.412377ms"
Jun 20 19:00:11.038001 containerd[1547]: time="2025-06-20T19:00:11.037947349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:00:11.040504 containerd[1547]: time="2025-06-20T19:00:11.040444797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.429696ms"
Jun 20 19:00:11.070569 kubelet[2282]: W0620 19:00:11.070464 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.74.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-4-ec216ba796&limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused
Jun 20 19:00:11.070805 kubelet[2282]: E0620 19:00:11.070610 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.74.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-4-ec216ba796&limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:00:11.220489 containerd[1547]: time="2025-06-20T19:00:11.217789125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:11.220691 containerd[1547]: time="2025-06-20T19:00:11.220440750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:11.220691 containerd[1547]: time="2025-06-20T19:00:11.220490369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:11.220826 containerd[1547]: time="2025-06-20T19:00:11.220695812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:11.221928 containerd[1547]: time="2025-06-20T19:00:11.221806652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:11.222507 containerd[1547]: time="2025-06-20T19:00:11.222008252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:11.222507 containerd[1547]: time="2025-06-20T19:00:11.222032871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:11.222507 containerd[1547]: time="2025-06-20T19:00:11.222175417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:11.223596 containerd[1547]: time="2025-06-20T19:00:11.222970042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:11.223596 containerd[1547]: time="2025-06-20T19:00:11.223106315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:11.223596 containerd[1547]: time="2025-06-20T19:00:11.223123236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:11.223596 containerd[1547]: time="2025-06-20T19:00:11.223322096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:11.245777 kubelet[2282]: W0620 19:00:11.245742 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.74.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused
Jun 20 19:00:11.245936 kubelet[2282]: E0620 19:00:11.245781 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.74.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:00:11.247718 systemd[1]: Started cri-containerd-7c95909c7db8a11f5cb56fa6a63c1040838c4e729a1973d2644c4918e1e58c12.scope - libcontainer container 7c95909c7db8a11f5cb56fa6a63c1040838c4e729a1973d2644c4918e1e58c12.
Jun 20 19:00:11.249616 systemd[1]: Started cri-containerd-8740e1b3c66368f5e2cacddd9a9773e1dee8ae4e420e225533029c9069da480e.scope - libcontainer container 8740e1b3c66368f5e2cacddd9a9773e1dee8ae4e420e225533029c9069da480e.
Jun 20 19:00:11.252851 systemd[1]: Started cri-containerd-236760e3f6091dea48f122285617c7516ed40db2ea6ee2771c34fd2f948bdede.scope - libcontainer container 236760e3f6091dea48f122285617c7516ed40db2ea6ee2771c34fd2f948bdede.
Jun 20 19:00:11.295281 containerd[1547]: time="2025-06-20T19:00:11.295189774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-4-ec216ba796,Uid:67b7f6ec6a09c20db189bb964469cf8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8740e1b3c66368f5e2cacddd9a9773e1dee8ae4e420e225533029c9069da480e\""
Jun 20 19:00:11.299494 containerd[1547]: time="2025-06-20T19:00:11.299358709Z" level=info msg="CreateContainer within sandbox \"8740e1b3c66368f5e2cacddd9a9773e1dee8ae4e420e225533029c9069da480e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jun 20 19:00:11.303211 containerd[1547]: time="2025-06-20T19:00:11.303102837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-4-ec216ba796,Uid:1ad1e74a8d5a5dab456b5d7961b02543,Namespace:kube-system,Attempt:0,} returns sandbox id \"236760e3f6091dea48f122285617c7516ed40db2ea6ee2771c34fd2f948bdede\""
Jun 20 19:00:11.305139 containerd[1547]: time="2025-06-20T19:00:11.305121441Z" level=info msg="CreateContainer within sandbox \"236760e3f6091dea48f122285617c7516ed40db2ea6ee2771c34fd2f948bdede\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jun 20 19:00:11.322565 containerd[1547]: time="2025-06-20T19:00:11.322456056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-4-ec216ba796,Uid:78e2a6298204b6cadcaf35bc7931a54c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c95909c7db8a11f5cb56fa6a63c1040838c4e729a1973d2644c4918e1e58c12\""
Jun 20 19:00:11.326377 containerd[1547]: time="2025-06-20T19:00:11.325893414Z" level=info msg="CreateContainer within sandbox \"7c95909c7db8a11f5cb56fa6a63c1040838c4e729a1973d2644c4918e1e58c12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jun 20 19:00:11.334261 containerd[1547]: time="2025-06-20T19:00:11.334227660Z" level=info msg="CreateContainer within sandbox \"236760e3f6091dea48f122285617c7516ed40db2ea6ee2771c34fd2f948bdede\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"efe655239d3cf292509aa1dbc0159867b7f33b90eda0f8d637c245eee953e167\""
Jun 20 19:00:11.334975 containerd[1547]: time="2025-06-20T19:00:11.334954450Z" level=info msg="StartContainer for \"efe655239d3cf292509aa1dbc0159867b7f33b90eda0f8d637c245eee953e167\""
Jun 20 19:00:11.337364 containerd[1547]: time="2025-06-20T19:00:11.337342852Z" level=info msg="CreateContainer within sandbox \"8740e1b3c66368f5e2cacddd9a9773e1dee8ae4e420e225533029c9069da480e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5\""
Jun 20 19:00:11.337968 containerd[1547]: time="2025-06-20T19:00:11.337952207Z" level=info msg="StartContainer for \"effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5\""
Jun 20 19:00:11.348850 containerd[1547]: time="2025-06-20T19:00:11.348803473Z" level=info msg="CreateContainer within sandbox \"7c95909c7db8a11f5cb56fa6a63c1040838c4e729a1973d2644c4918e1e58c12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df\""
Jun 20 19:00:11.350130 containerd[1547]: time="2025-06-20T19:00:11.350101198Z" level=info msg="StartContainer for \"73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df\""
Jun 20 19:00:11.366747 systemd[1]: Started cri-containerd-efe655239d3cf292509aa1dbc0159867b7f33b90eda0f8d637c245eee953e167.scope - libcontainer container efe655239d3cf292509aa1dbc0159867b7f33b90eda0f8d637c245eee953e167.
Jun 20 19:00:11.375784 systemd[1]: Started cri-containerd-effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5.scope - libcontainer container effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5. Jun 20 19:00:11.396682 systemd[1]: Started cri-containerd-73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df.scope - libcontainer container 73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df. Jun 20 19:00:11.431269 containerd[1547]: time="2025-06-20T19:00:11.430825774Z" level=info msg="StartContainer for \"efe655239d3cf292509aa1dbc0159867b7f33b90eda0f8d637c245eee953e167\" returns successfully" Jun 20 19:00:11.448044 kubelet[2282]: E0620 19:00:11.447751 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.74.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-4-ec216ba796?timeout=10s\": dial tcp 157.180.74.176:6443: connect: connection refused" interval="1.6s" Jun 20 19:00:11.449019 containerd[1547]: time="2025-06-20T19:00:11.448862579Z" level=info msg="StartContainer for \"effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5\" returns successfully" Jun 20 19:00:11.463542 containerd[1547]: time="2025-06-20T19:00:11.463452492Z" level=info msg="StartContainer for \"73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df\" returns successfully" Jun 20 19:00:11.523082 kubelet[2282]: W0620 19:00:11.522915 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.74.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused Jun 20 19:00:11.523082 kubelet[2282]: E0620 19:00:11.522986 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://157.180.74.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:00:11.612371 kubelet[2282]: W0620 19:00:11.612244 2282 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.74.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.74.176:6443: connect: connection refused
Jun 20 19:00:11.612371 kubelet[2282]: E0620 19:00:11.612328 2282 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.74.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.74.176:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:00:11.627685 kubelet[2282]: I0620 19:00:11.627639 2282 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:11.628280 kubelet[2282]: E0620 19:00:11.628239 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.74.176:6443/api/v1/nodes\": dial tcp 157.180.74.176:6443: connect: connection refused" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:12.087179 kubelet[2282]: E0620 19:00:12.085730 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:12.087179 kubelet[2282]: E0620 19:00:12.085986 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:12.088742 kubelet[2282]: E0620 19:00:12.088637 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the
cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.057000 kubelet[2282]: E0620 19:00:13.056931 2282 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.092824 kubelet[2282]: E0620 19:00:13.092780 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.093879 kubelet[2282]: E0620 19:00:13.093842 2282 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-4-ec216ba796\" not found" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.231127 kubelet[2282]: I0620 19:00:13.231060 2282 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.250471 kubelet[2282]: I0620 19:00:13.250394 2282 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.250471 kubelet[2282]: E0620 19:00:13.250454 2282 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-0-4-ec216ba796\": node \"ci-4230-2-0-4-ec216ba796\" not found"
Jun 20 19:00:13.269064 kubelet[2282]: E0620 19:00:13.269000 2282 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-4-ec216ba796\" not found"
Jun 20 19:00:13.440020 kubelet[2282]: I0620 19:00:13.439425 2282 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.448501 kubelet[2282]: E0620 19:00:13.448175 2282 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" is forbidden: no PriorityClass with name system-node-critical was found"
pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.448501 kubelet[2282]: I0620 19:00:13.448218 2282 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.450605 kubelet[2282]: E0620 19:00:13.450558 2282 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-4-ec216ba796\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.450700 kubelet[2282]: I0620 19:00:13.450621 2282 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:13.453034 kubelet[2282]: E0620 19:00:13.452981 2282 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:14.010983 kubelet[2282]: I0620 19:00:14.010906 2282 apiserver.go:52] "Watching apiserver"
Jun 20 19:00:14.043258 kubelet[2282]: I0620 19:00:14.043191 2282 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:00:15.158139 systemd[1]: Reload requested from client PID 2551 ('systemctl') (unit session-7.scope)...
Jun 20 19:00:15.158169 systemd[1]: Reloading...
Jun 20 19:00:15.296562 zram_generator::config[2599]: No configuration found.
Jun 20 19:00:15.400751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:00:15.518589 systemd[1]: Reloading finished in 359 ms.
Jun 20 19:00:15.543552 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:00:15.543914 kubelet[2282]: I0620 19:00:15.543889 2282 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:00:15.555042 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 19:00:15.555421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:00:15.555502 systemd[1]: kubelet.service: Consumed 783ms CPU time, 127.5M memory peak.
Jun 20 19:00:15.562964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:00:15.712849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:00:15.716287 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:00:15.813198 kubelet[2647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:00:15.813198 kubelet[2647]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:00:15.813198 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:00:15.814026 kubelet[2647]: I0620 19:00:15.813146 2647 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:00:15.823314 kubelet[2647]: I0620 19:00:15.823262 2647 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jun 20 19:00:15.823314 kubelet[2647]: I0620 19:00:15.823304 2647 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:00:15.823789 kubelet[2647]: I0620 19:00:15.823753 2647 server.go:954] "Client rotation is on, will bootstrap in background"
Jun 20 19:00:15.829491 kubelet[2647]: I0620 19:00:15.829020 2647 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 20 19:00:15.834924 kubelet[2647]: I0620 19:00:15.834895 2647 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:00:15.839703 kubelet[2647]: E0620 19:00:15.839652 2647 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jun 20 19:00:15.839920 kubelet[2647]: I0620 19:00:15.839899 2647 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jun 20 19:00:15.846174 kubelet[2647]: I0620 19:00:15.846146 2647 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
Jun 20 19:00:15.846636 kubelet[2647]: I0620 19:00:15.846599 2647 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:00:15.846980 kubelet[2647]: I0620 19:00:15.846716 2647 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-4-ec216ba796","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:00:15.847151 kubelet[2647]: I0620 19:00:15.847138 2647 topology_manager.go:138] "Creating topology manager
with none policy"
Jun 20 19:00:15.847575 kubelet[2647]: I0620 19:00:15.847206 2647 container_manager_linux.go:304] "Creating device plugin manager"
Jun 20 19:00:15.847575 kubelet[2647]: I0620 19:00:15.847264 2647 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:00:15.847575 kubelet[2647]: I0620 19:00:15.847442 2647 kubelet.go:446] "Attempting to sync node with API server"
Jun 20 19:00:15.847575 kubelet[2647]: I0620 19:00:15.847468 2647 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:00:15.847575 kubelet[2647]: I0620 19:00:15.847491 2647 kubelet.go:352] "Adding apiserver pod source"
Jun 20 19:00:15.847575 kubelet[2647]: I0620 19:00:15.847503 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:00:15.849239 kubelet[2647]: I0620 19:00:15.849223 2647 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jun 20 19:00:15.849984 kubelet[2647]: I0620 19:00:15.849931 2647 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 19:00:15.850775 kubelet[2647]: I0620 19:00:15.850760 2647 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:00:15.850898 kubelet[2647]: I0620 19:00:15.850885 2647 server.go:1287] "Started kubelet"
Jun 20 19:00:15.856451 kubelet[2647]: I0620 19:00:15.856247 2647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:00:15.862780 kubelet[2647]: I0620 19:00:15.862736 2647 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:00:15.865900 kubelet[2647]: I0620 19:00:15.864959 2647 server.go:479] "Adding debug handlers to kubelet server"
Jun 20 19:00:15.870203 kubelet[2647]: I0620 19:00:15.870131 2647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:00:15.870450 kubelet[2647]: I0620 19:00:15.870411 2647 server.go:243] "Starting to serve the
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:00:15.872553 kubelet[2647]: I0620 19:00:15.870731 2647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:00:15.874460 kubelet[2647]: I0620 19:00:15.874429 2647 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:00:15.874815 kubelet[2647]: E0620 19:00:15.874784 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-4-ec216ba796\" not found"
Jun 20 19:00:15.877364 kubelet[2647]: I0620 19:00:15.877297 2647 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:00:15.878114 kubelet[2647]: I0620 19:00:15.877580 2647 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:00:15.899168 kubelet[2647]: I0620 19:00:15.899007 2647 factory.go:221] Registration of the containerd container factory successfully
Jun 20 19:00:15.899168 kubelet[2647]: I0620 19:00:15.899029 2647 factory.go:221] Registration of the systemd container factory successfully
Jun 20 19:00:15.899168 kubelet[2647]: I0620 19:00:15.899105 2647 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:00:15.908938 kubelet[2647]: I0620 19:00:15.908906 2647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:00:15.910228 kubelet[2647]: I0620 19:00:15.910215 2647 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6"
Jun 20 19:00:15.910305 kubelet[2647]: I0620 19:00:15.910298 2647 status_manager.go:227] "Starting to sync pod status with apiserver"
Jun 20 19:00:15.910359 kubelet[2647]: I0620 19:00:15.910353 2647 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:00:15.910399 kubelet[2647]: I0620 19:00:15.910394 2647 kubelet.go:2382] "Starting kubelet main sync loop"
Jun 20 19:00:15.910476 kubelet[2647]: E0620 19:00:15.910462 2647 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:00:15.948764 kubelet[2647]: I0620 19:00:15.948718 2647 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:00:15.948764 kubelet[2647]: I0620 19:00:15.948742 2647 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:00:15.948764 kubelet[2647]: I0620 19:00:15.948761 2647 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:00:15.948994 kubelet[2647]: I0620 19:00:15.948949 2647 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 20 19:00:15.948994 kubelet[2647]: I0620 19:00:15.948959 2647 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 20 19:00:15.948994 kubelet[2647]: I0620 19:00:15.948981 2647 policy_none.go:49] "None policy: Start"
Jun 20 19:00:15.948994 kubelet[2647]: I0620 19:00:15.948990 2647 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:00:15.948994 kubelet[2647]: I0620 19:00:15.948998 2647 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:00:15.949153 kubelet[2647]: I0620 19:00:15.949118 2647 state_mem.go:75] "Updated machine memory state"
Jun 20 19:00:15.954776 kubelet[2647]: I0620 19:00:15.954749 2647 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 20 19:00:15.954951 kubelet[2647]: I0620
19:00:15.954932 2647 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:00:15.954986 kubelet[2647]: I0620 19:00:15.954951 2647 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:00:15.956195 kubelet[2647]: I0620 19:00:15.956157 2647 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:00:15.959707 kubelet[2647]: E0620 19:00:15.958517 2647 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:00:16.013219 kubelet[2647]: I0620 19:00:16.012308 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.014079 kubelet[2647]: I0620 19:00:16.013865 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.015089 kubelet[2647]: I0620 19:00:16.013988 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.060221 kubelet[2647]: I0620 19:00:16.060156 2647 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.072922 kubelet[2647]: I0620 19:00:16.072714 2647 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.072922 kubelet[2647]: I0620 19:00:16.072837 2647 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.175999 sudo[2680]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jun 20 19:00:16.176391 sudo[2680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jun 20 19:00:16.179085 kubelet[2647]: I0620 19:00:16.179034 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179231 kubelet[2647]: I0620 19:00:16.179119 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179231 kubelet[2647]: I0620 19:00:16.179152 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ad1e74a8d5a5dab456b5d7961b02543-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" (UID: \"1ad1e74a8d5a5dab456b5d7961b02543\") " pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179231 kubelet[2647]: I0620 19:00:16.179205 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ad1e74a8d5a5dab456b5d7961b02543-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" (UID: \"1ad1e74a8d5a5dab456b5d7961b02543\") " pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179344 kubelet[2647]: I0620 19:00:16.179230 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ad1e74a8d5a5dab456b5d7961b02543-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" (UID: \"1ad1e74a8d5a5dab456b5d7961b02543\") "
pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179344 kubelet[2647]: I0620 19:00:16.179275 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179344 kubelet[2647]: I0620 19:00:16.179298 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179344 kubelet[2647]: I0620 19:00:16.179321 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/67b7f6ec6a09c20db189bb964469cf8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" (UID: \"67b7f6ec6a09c20db189bb964469cf8a\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.179480 kubelet[2647]: I0620 19:00:16.179370 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78e2a6298204b6cadcaf35bc7931a54c-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-4-ec216ba796\" (UID: \"78e2a6298204b6cadcaf35bc7931a54c\") " pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.784880 sudo[2680]: pam_unix(sudo:session): session closed for user root
Jun 20 19:00:16.849146 kubelet[2647]: I0620 19:00:16.848776 2647 apiserver.go:52] "Watching apiserver"
Jun 20 19:00:16.877731 kubelet[2647]:
I0620 19:00:16.877666 2647 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:00:16.932571 kubelet[2647]: I0620 19:00:16.930708 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.932571 kubelet[2647]: I0620 19:00:16.932342 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.933553 kubelet[2647]: I0620 19:00:16.933477 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.944256 kubelet[2647]: E0620 19:00:16.944179 2647 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-4-ec216ba796\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.948629 kubelet[2647]: E0620 19:00:16.948588 2647 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-4-ec216ba796\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.948884 kubelet[2647]: E0620 19:00:16.948847 2647 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-4-ec216ba796\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796"
Jun 20 19:00:16.971677 kubelet[2647]: I0620 19:00:16.971560 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-0-4-ec216ba796" podStartSLOduration=0.971538479 podStartE2EDuration="971.538479ms" podCreationTimestamp="2025-06-20 19:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:00:16.970850183 +0000 UTC m=+1.247338472" watchObservedRunningTime="2025-06-20 19:00:16.971538479 +0000 UTC m=+1.248026768"
Jun 20
19:00:16.997268 kubelet[2647]: I0620 19:00:16.997181 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-0-4-ec216ba796" podStartSLOduration=0.997154231 podStartE2EDuration="997.154231ms" podCreationTimestamp="2025-06-20 19:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:00:16.995237623 +0000 UTC m=+1.271725912" watchObservedRunningTime="2025-06-20 19:00:16.997154231 +0000 UTC m=+1.273642510"
Jun 20 19:00:16.997514 kubelet[2647]: I0620 19:00:16.997355 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-0-4-ec216ba796" podStartSLOduration=0.997350556 podStartE2EDuration="997.350556ms" podCreationTimestamp="2025-06-20 19:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:00:16.984685425 +0000 UTC m=+1.261173714" watchObservedRunningTime="2025-06-20 16.997350556 +0000 UTC m=+1.273838845"
Jun 20 19:00:18.755109 sudo[1745]: pam_unix(sudo:session): session closed for user root
Jun 20 19:00:18.913035 sshd[1741]: Connection closed by 139.178.68.195 port 54688
Jun 20 19:00:18.915262 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jun 20 19:00:18.919328 systemd[1]: sshd@6-157.180.74.176:22-139.178.68.195:54688.service: Deactivated successfully.
Jun 20 19:00:18.922656 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:00:18.922913 systemd[1]: session-7.scope: Consumed 5.609s CPU time, 211.7M memory peak.
Jun 20 19:00:18.925960 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:00:18.927602 systemd-logind[1529]: Removed session 7.
Jun 20 19:00:21.881342 kubelet[2647]: I0620 19:00:21.881223 2647 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 19:00:21.882349 containerd[1547]: time="2025-06-20T19:00:21.882243815Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 20 19:00:21.882869 kubelet[2647]: I0620 19:00:21.882656 2647 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 19:00:22.899750 systemd[1]: Created slice kubepods-besteffort-pod131ca92e_c9fb_4854_90cc_b0d16ea3b0e2.slice - libcontainer container kubepods-besteffort-pod131ca92e_c9fb_4854_90cc_b0d16ea3b0e2.slice.
Jun 20 19:00:22.914457 systemd[1]: Created slice kubepods-burstable-podb0ca000e_2bf9_4f69_8a9d_cf84eacc1b32.slice - libcontainer container kubepods-burstable-podb0ca000e_2bf9_4f69_8a9d_cf84eacc1b32.slice.
Jun 20 19:00:22.929602 kubelet[2647]: I0620 19:00:22.928300 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-etc-cni-netd\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.929602 kubelet[2647]: I0620 19:00:22.928348 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-bpf-maps\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.929602 kubelet[2647]: I0620 19:00:22.928382 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-run\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.929602 kubelet[2647]: I0620 19:00:22.928403 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cni-path\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.929602 kubelet[2647]: I0620 19:00:22.928423 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-lib-modules\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.929602 kubelet[2647]: I0620 19:00:22.928444 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-config-path\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.930054 kubelet[2647]: I0620 19:00:22.928460 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-net\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.930054 kubelet[2647]: I0620 19:00:22.928477 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-kernel\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.930054 kubelet[2647]: I0620 19:00:22.928495 2647
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hubble-tls\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.930054 kubelet[2647]: I0620 19:00:22.928513 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vfs\" (UniqueName: \"kubernetes.io/projected/131ca92e-c9fb-4854-90cc-b0d16ea3b0e2-kube-api-access-l8vfs\") pod \"kube-proxy-7gg47\" (UID: \"131ca92e-c9fb-4854-90cc-b0d16ea3b0e2\") " pod="kube-system/kube-proxy-7gg47"
Jun 20 19:00:22.930054 kubelet[2647]: I0620 19:00:22.929915 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-xtables-lock\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.930199 kubelet[2647]: I0620 19:00:22.929940 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxd8\" (UniqueName: \"kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-kube-api-access-rxxd8\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2"
Jun 20 19:00:22.930199 kubelet[2647]: I0620 19:00:22.929978 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/131ca92e-c9fb-4854-90cc-b0d16ea3b0e2-lib-modules\") pod \"kube-proxy-7gg47\" (UID: \"131ca92e-c9fb-4854-90cc-b0d16ea3b0e2\") " pod="kube-system/kube-proxy-7gg47"
Jun 20 19:00:22.930199 kubelet[2647]: I0620 19:00:22.929999 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-cgroup\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2" Jun 20 19:00:22.930199 kubelet[2647]: I0620 19:00:22.930016 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-clustermesh-secrets\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2" Jun 20 19:00:22.930199 kubelet[2647]: I0620 19:00:22.930047 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/131ca92e-c9fb-4854-90cc-b0d16ea3b0e2-xtables-lock\") pod \"kube-proxy-7gg47\" (UID: \"131ca92e-c9fb-4854-90cc-b0d16ea3b0e2\") " pod="kube-system/kube-proxy-7gg47" Jun 20 19:00:22.930301 kubelet[2647]: I0620 19:00:22.930066 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/131ca92e-c9fb-4854-90cc-b0d16ea3b0e2-kube-proxy\") pod \"kube-proxy-7gg47\" (UID: \"131ca92e-c9fb-4854-90cc-b0d16ea3b0e2\") " pod="kube-system/kube-proxy-7gg47" Jun 20 19:00:22.930301 kubelet[2647]: I0620 19:00:22.930085 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hostproc\") pod \"cilium-zf4m2\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") " pod="kube-system/cilium-zf4m2" Jun 20 19:00:23.065542 systemd[1]: Created slice kubepods-besteffort-podb6fc2633_e411_4526_aab5_5c4e55b6809f.slice - libcontainer container kubepods-besteffort-podb6fc2633_e411_4526_aab5_5c4e55b6809f.slice. 
Jun 20 19:00:23.132416 kubelet[2647]: I0620 19:00:23.132337 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6fc2633-e411-4526-aab5-5c4e55b6809f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-slplf\" (UID: \"b6fc2633-e411-4526-aab5-5c4e55b6809f\") " pod="kube-system/cilium-operator-6c4d7847fc-slplf"
Jun 20 19:00:23.132416 kubelet[2647]: I0620 19:00:23.132387 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpg95\" (UniqueName: \"kubernetes.io/projected/b6fc2633-e411-4526-aab5-5c4e55b6809f-kube-api-access-hpg95\") pod \"cilium-operator-6c4d7847fc-slplf\" (UID: \"b6fc2633-e411-4526-aab5-5c4e55b6809f\") " pod="kube-system/cilium-operator-6c4d7847fc-slplf"
Jun 20 19:00:23.194905 update_engine[1532]: I20250620 19:00:23.194789 1532 update_attempter.cc:509] Updating boot flags...
Jun 20 19:00:23.209322 containerd[1547]: time="2025-06-20T19:00:23.208770704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7gg47,Uid:131ca92e-c9fb-4854-90cc-b0d16ea3b0e2,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:23.220103 containerd[1547]: time="2025-06-20T19:00:23.220044825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zf4m2,Uid:b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:23.265963 containerd[1547]: time="2025-06-20T19:00:23.265636815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:23.265963 containerd[1547]: time="2025-06-20T19:00:23.265713175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:23.265963 containerd[1547]: time="2025-06-20T19:00:23.265734143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:23.265963 containerd[1547]: time="2025-06-20T19:00:23.265820635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:23.294872 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2755)
Jun 20 19:00:23.309075 systemd[1]: Started cri-containerd-1b883f1438efea8e098ce887b5033e7460ce917e922af7906d792f41cff45178.scope - libcontainer container 1b883f1438efea8e098ce887b5033e7460ce917e922af7906d792f41cff45178.
Jun 20 19:00:23.314534 containerd[1547]: time="2025-06-20T19:00:23.314443473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:23.314598 containerd[1547]: time="2025-06-20T19:00:23.314539156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:23.314598 containerd[1547]: time="2025-06-20T19:00:23.314562988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:23.316839 containerd[1547]: time="2025-06-20T19:00:23.314657590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:23.367761 containerd[1547]: time="2025-06-20T19:00:23.367677357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-slplf,Uid:b6fc2633-e411-4526-aab5-5c4e55b6809f,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:23.398553 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2759)
Jun 20 19:00:23.451707 systemd[1]: Started cri-containerd-b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2.scope - libcontainer container b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2.
Jun 20 19:00:23.452086 containerd[1547]: time="2025-06-20T19:00:23.451469980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:23.452086 containerd[1547]: time="2025-06-20T19:00:23.451536859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:23.453130 containerd[1547]: time="2025-06-20T19:00:23.452801295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:23.453130 containerd[1547]: time="2025-06-20T19:00:23.453057786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:23.487051 containerd[1547]: time="2025-06-20T19:00:23.487001201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7gg47,Uid:131ca92e-c9fb-4854-90cc-b0d16ea3b0e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b883f1438efea8e098ce887b5033e7460ce917e922af7906d792f41cff45178\""
Jun 20 19:00:23.501900 systemd[1]: Started cri-containerd-41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca.scope - libcontainer container 41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca.
Jun 20 19:00:23.506211 containerd[1547]: time="2025-06-20T19:00:23.505635790Z" level=info msg="CreateContainer within sandbox \"1b883f1438efea8e098ce887b5033e7460ce917e922af7906d792f41cff45178\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 19:00:23.525652 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2759)
Jun 20 19:00:23.557470 containerd[1547]: time="2025-06-20T19:00:23.556521737Z" level=info msg="CreateContainer within sandbox \"1b883f1438efea8e098ce887b5033e7460ce917e922af7906d792f41cff45178\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73ed92b488662bfb6c0c7ffbc51222289aa0d9dd4575f830d64edfca83be2fa1\""
Jun 20 19:00:23.560808 containerd[1547]: time="2025-06-20T19:00:23.560774166Z" level=info msg="StartContainer for \"73ed92b488662bfb6c0c7ffbc51222289aa0d9dd4575f830d64edfca83be2fa1\""
Jun 20 19:00:23.598859 containerd[1547]: time="2025-06-20T19:00:23.598665801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zf4m2,Uid:b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32,Namespace:kube-system,Attempt:0,} returns sandbox id \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\""
Jun 20 19:00:23.602479 containerd[1547]: time="2025-06-20T19:00:23.602228583Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 20 19:00:23.614727 containerd[1547]: time="2025-06-20T19:00:23.614685398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-slplf,Uid:b6fc2633-e411-4526-aab5-5c4e55b6809f,Namespace:kube-system,Attempt:0,} returns sandbox id \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\""
Jun 20 19:00:23.627673 systemd[1]: Started cri-containerd-73ed92b488662bfb6c0c7ffbc51222289aa0d9dd4575f830d64edfca83be2fa1.scope - libcontainer container 73ed92b488662bfb6c0c7ffbc51222289aa0d9dd4575f830d64edfca83be2fa1.
Jun 20 19:00:23.661773 containerd[1547]: time="2025-06-20T19:00:23.661742711Z" level=info msg="StartContainer for \"73ed92b488662bfb6c0c7ffbc51222289aa0d9dd4575f830d64edfca83be2fa1\" returns successfully"
Jun 20 19:00:23.971386 kubelet[2647]: I0620 19:00:23.971067 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7gg47" podStartSLOduration=1.971035998 podStartE2EDuration="1.971035998s" podCreationTimestamp="2025-06-20 19:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:00:23.970056277 +0000 UTC m=+8.246544607" watchObservedRunningTime="2025-06-20 19:00:23.971035998 +0000 UTC m=+8.247524317"
Jun 20 19:00:30.569163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981554107.mount: Deactivated successfully.
Jun 20 19:00:32.293561 containerd[1547]: time="2025-06-20T19:00:32.293455072Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:32.296786 containerd[1547]: time="2025-06-20T19:00:32.296697708Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jun 20 19:00:32.297286 containerd[1547]: time="2025-06-20T19:00:32.297194560Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:32.325701 containerd[1547]: time="2025-06-20T19:00:32.325480289Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.723217321s"
Jun 20 19:00:32.325701 containerd[1547]: time="2025-06-20T19:00:32.325572247Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 20 19:00:32.327680 containerd[1547]: time="2025-06-20T19:00:32.327637694Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 20 19:00:32.330014 containerd[1547]: time="2025-06-20T19:00:32.329860229Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:00:32.443958 containerd[1547]: time="2025-06-20T19:00:32.443903698Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\""
Jun 20 19:00:32.444927 containerd[1547]: time="2025-06-20T19:00:32.444896109Z" level=info msg="StartContainer for \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\""
Jun 20 19:00:32.685245 systemd[1]: run-containerd-runc-k8s.io-5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64-runc.FnOMib.mount: Deactivated successfully.
Jun 20 19:00:32.693737 systemd[1]: Started cri-containerd-5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64.scope - libcontainer container 5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64.
Jun 20 19:00:32.727461 containerd[1547]: time="2025-06-20T19:00:32.727384311Z" level=info msg="StartContainer for \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\" returns successfully"
Jun 20 19:00:32.742701 systemd[1]: cri-containerd-5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64.scope: Deactivated successfully.
Jun 20 19:00:32.959052 containerd[1547]: time="2025-06-20T19:00:32.920114423Z" level=info msg="shim disconnected" id=5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64 namespace=k8s.io
Jun 20 19:00:32.959052 containerd[1547]: time="2025-06-20T19:00:32.958867848Z" level=warning msg="cleaning up after shim disconnected" id=5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64 namespace=k8s.io
Jun 20 19:00:32.959052 containerd[1547]: time="2025-06-20T19:00:32.958900638Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:00:33.436475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64-rootfs.mount: Deactivated successfully.
Jun 20 19:00:33.993457 containerd[1547]: time="2025-06-20T19:00:33.993318816Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:00:34.016393 containerd[1547]: time="2025-06-20T19:00:34.014291636Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\""
Jun 20 19:00:34.016393 containerd[1547]: time="2025-06-20T19:00:34.015328624Z" level=info msg="StartContainer for \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\""
Jun 20 19:00:34.024906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131827122.mount: Deactivated successfully.
Jun 20 19:00:34.077833 systemd[1]: Started cri-containerd-ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546.scope - libcontainer container ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546.
Jun 20 19:00:34.125304 containerd[1547]: time="2025-06-20T19:00:34.125131992Z" level=info msg="StartContainer for \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\" returns successfully"
Jun 20 19:00:34.141294 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:00:34.142035 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:00:34.142367 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:00:34.152222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:00:34.152498 systemd[1]: cri-containerd-ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546.scope: Deactivated successfully.
Jun 20 19:00:34.191638 containerd[1547]: time="2025-06-20T19:00:34.191549032Z" level=info msg="shim disconnected" id=ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546 namespace=k8s.io
Jun 20 19:00:34.191638 containerd[1547]: time="2025-06-20T19:00:34.191632601Z" level=warning msg="cleaning up after shim disconnected" id=ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546 namespace=k8s.io
Jun 20 19:00:34.191638 containerd[1547]: time="2025-06-20T19:00:34.191642431Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:00:34.196807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:00:34.438500 systemd[1]: run-containerd-runc-k8s.io-ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546-runc.F8Er2I.mount: Deactivated successfully.
Jun 20 19:00:34.438766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546-rootfs.mount: Deactivated successfully.
Jun 20 19:00:35.008362 containerd[1547]: time="2025-06-20T19:00:35.008262792Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:00:35.133486 containerd[1547]: time="2025-06-20T19:00:35.133348693Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\""
Jun 20 19:00:35.135561 containerd[1547]: time="2025-06-20T19:00:35.134032670Z" level=info msg="StartContainer for \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\""
Jun 20 19:00:35.187696 systemd[1]: Started cri-containerd-b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808.scope - libcontainer container b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808.
Jun 20 19:00:35.222827 containerd[1547]: time="2025-06-20T19:00:35.222783717Z" level=info msg="StartContainer for \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\" returns successfully"
Jun 20 19:00:35.226219 systemd[1]: cri-containerd-b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808.scope: Deactivated successfully.
Jun 20 19:00:35.256086 containerd[1547]: time="2025-06-20T19:00:35.255980340Z" level=info msg="shim disconnected" id=b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808 namespace=k8s.io
Jun 20 19:00:35.256086 containerd[1547]: time="2025-06-20T19:00:35.256058986Z" level=warning msg="cleaning up after shim disconnected" id=b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808 namespace=k8s.io
Jun 20 19:00:35.256086 containerd[1547]: time="2025-06-20T19:00:35.256070771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:00:35.437633 systemd[1]: run-containerd-runc-k8s.io-b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808-runc.xCbhzu.mount: Deactivated successfully.
Jun 20 19:00:35.437814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808-rootfs.mount: Deactivated successfully.
Jun 20 19:00:36.023396 containerd[1547]: time="2025-06-20T19:00:36.023105822Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:00:36.059214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613815526.mount: Deactivated successfully.
Jun 20 19:00:36.060620 containerd[1547]: time="2025-06-20T19:00:36.060422941Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\""
Jun 20 19:00:36.062231 containerd[1547]: time="2025-06-20T19:00:36.061231050Z" level=info msg="StartContainer for \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\""
Jun 20 19:00:36.114819 systemd[1]: Started cri-containerd-5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897.scope - libcontainer container 5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897.
Jun 20 19:00:36.150768 systemd[1]: cri-containerd-5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897.scope: Deactivated successfully.
Jun 20 19:00:36.155281 containerd[1547]: time="2025-06-20T19:00:36.155157926Z" level=info msg="StartContainer for \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\" returns successfully"
Jun 20 19:00:36.190303 containerd[1547]: time="2025-06-20T19:00:36.190226825Z" level=info msg="shim disconnected" id=5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897 namespace=k8s.io
Jun 20 19:00:36.190701 containerd[1547]: time="2025-06-20T19:00:36.190648833Z" level=warning msg="cleaning up after shim disconnected" id=5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897 namespace=k8s.io
Jun 20 19:00:36.190701 containerd[1547]: time="2025-06-20T19:00:36.190676039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:00:36.437788 systemd[1]: run-containerd-runc-k8s.io-5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897-runc.xdVp4b.mount: Deactivated successfully.
Jun 20 19:00:36.437928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897-rootfs.mount: Deactivated successfully.
Jun 20 19:00:37.032602 containerd[1547]: time="2025-06-20T19:00:37.032467497Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:00:37.081101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711973185.mount: Deactivated successfully.
Jun 20 19:00:37.087715 containerd[1547]: time="2025-06-20T19:00:37.084883623Z" level=info msg="CreateContainer within sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\""
Jun 20 19:00:37.089221 containerd[1547]: time="2025-06-20T19:00:37.089180193Z" level=info msg="StartContainer for \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\""
Jun 20 19:00:37.125301 systemd[1]: Started cri-containerd-b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69.scope - libcontainer container b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69.
Jun 20 19:00:37.169299 containerd[1547]: time="2025-06-20T19:00:37.169215951Z" level=info msg="StartContainer for \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\" returns successfully"
Jun 20 19:00:37.382119 kubelet[2647]: I0620 19:00:37.381688 2647 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jun 20 19:00:37.473298 systemd[1]: Created slice kubepods-burstable-pod466a9534_fe12_476e_9f50_a4bae6981ca2.slice - libcontainer container kubepods-burstable-pod466a9534_fe12_476e_9f50_a4bae6981ca2.slice.
Jun 20 19:00:37.482296 systemd[1]: Created slice kubepods-burstable-pod9bf08c48_72aa_41ef_b008_a01dff680d2a.slice - libcontainer container kubepods-burstable-pod9bf08c48_72aa_41ef_b008_a01dff680d2a.slice.
Jun 20 19:00:37.551956 kubelet[2647]: I0620 19:00:37.551896 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466a9534-fe12-476e-9f50-a4bae6981ca2-config-volume\") pod \"coredns-668d6bf9bc-gg7gs\" (UID: \"466a9534-fe12-476e-9f50-a4bae6981ca2\") " pod="kube-system/coredns-668d6bf9bc-gg7gs"
Jun 20 19:00:37.551956 kubelet[2647]: I0620 19:00:37.551937 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz78x\" (UniqueName: \"kubernetes.io/projected/9bf08c48-72aa-41ef-b008-a01dff680d2a-kube-api-access-fz78x\") pod \"coredns-668d6bf9bc-vm5th\" (UID: \"9bf08c48-72aa-41ef-b008-a01dff680d2a\") " pod="kube-system/coredns-668d6bf9bc-vm5th"
Jun 20 19:00:37.551956 kubelet[2647]: I0620 19:00:37.551956 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bf08c48-72aa-41ef-b008-a01dff680d2a-config-volume\") pod \"coredns-668d6bf9bc-vm5th\" (UID: \"9bf08c48-72aa-41ef-b008-a01dff680d2a\") " pod="kube-system/coredns-668d6bf9bc-vm5th"
Jun 20 19:00:37.553225 kubelet[2647]: I0620 19:00:37.551974 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcs44\" (UniqueName: \"kubernetes.io/projected/466a9534-fe12-476e-9f50-a4bae6981ca2-kube-api-access-hcs44\") pod \"coredns-668d6bf9bc-gg7gs\" (UID: \"466a9534-fe12-476e-9f50-a4bae6981ca2\") " pod="kube-system/coredns-668d6bf9bc-gg7gs"
Jun 20 19:00:37.781350 containerd[1547]: time="2025-06-20T19:00:37.781270018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gg7gs,Uid:466a9534-fe12-476e-9f50-a4bae6981ca2,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:37.787897 containerd[1547]: time="2025-06-20T19:00:37.787837129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm5th,Uid:9bf08c48-72aa-41ef-b008-a01dff680d2a,Namespace:kube-system,Attempt:0,}"
Jun 20 19:00:38.059493 kubelet[2647]: I0620 19:00:38.058937 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zf4m2" podStartSLOduration=7.333434087 podStartE2EDuration="16.058874875s" podCreationTimestamp="2025-06-20 19:00:22 +0000 UTC" firstStartedPulling="2025-06-20 19:00:23.601797884 +0000 UTC m=+7.878286173" lastFinishedPulling="2025-06-20 19:00:32.327238633 +0000 UTC m=+16.603726961" observedRunningTime="2025-06-20 19:00:38.057906939 +0000 UTC m=+22.334395259" watchObservedRunningTime="2025-06-20 19:00:38.058874875 +0000 UTC m=+22.335363203"
Jun 20 19:00:47.215626 containerd[1547]: time="2025-06-20T19:00:47.215554327Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:47.217051 containerd[1547]: time="2025-06-20T19:00:47.217019324Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jun 20 19:00:47.218446 containerd[1547]: time="2025-06-20T19:00:47.218410962Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:00:47.220121 containerd[1547]: time="2025-06-20T19:00:47.219997152Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 14.892314822s"
Jun 20 19:00:47.220121 containerd[1547]: time="2025-06-20T19:00:47.220030559Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 20 19:00:47.222321 containerd[1547]: time="2025-06-20T19:00:47.222283009Z" level=info msg="CreateContainer within sandbox \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 20 19:00:47.247653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112191790.mount: Deactivated successfully.
Jun 20 19:00:47.249203 containerd[1547]: time="2025-06-20T19:00:47.249160480Z" level=info msg="CreateContainer within sandbox \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\""
Jun 20 19:00:47.249854 containerd[1547]: time="2025-06-20T19:00:47.249715606Z" level=info msg="StartContainer for \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\""
Jun 20 19:00:47.278657 systemd[1]: Started cri-containerd-f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2.scope - libcontainer container f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2.
Jun 20 19:00:47.304569 containerd[1547]: time="2025-06-20T19:00:47.304507593Z" level=info msg="StartContainer for \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\" returns successfully"
Jun 20 19:00:51.259063 systemd-networkd[1429]: cilium_host: Link UP
Jun 20 19:00:51.259311 systemd-networkd[1429]: cilium_net: Link UP
Jun 20 19:00:51.259596 systemd-networkd[1429]: cilium_net: Gained carrier
Jun 20 19:00:51.259911 systemd-networkd[1429]: cilium_host: Gained carrier
Jun 20 19:00:51.373514 systemd-networkd[1429]: cilium_vxlan: Link UP
Jun 20 19:00:51.373884 systemd-networkd[1429]: cilium_vxlan: Gained carrier
Jun 20 19:00:51.793913 systemd-networkd[1429]: cilium_host: Gained IPv6LL
Jun 20 19:00:51.986802 systemd-networkd[1429]: cilium_net: Gained IPv6LL
Jun 20 19:00:52.009671 kernel: NET: Registered PF_ALG protocol family
Jun 20 19:00:52.813735 systemd-networkd[1429]: lxc_health: Link UP
Jun 20 19:00:52.818630 systemd-networkd[1429]: lxc_health: Gained carrier
Jun 20 19:00:52.945161 systemd-networkd[1429]: lxc3018af32733b: Link UP
Jun 20 19:00:52.949551 kernel: eth0: renamed from tmp784d9
Jun 20 19:00:52.952513 systemd-networkd[1429]: lxc3018af32733b: Gained carrier
Jun 20 19:00:52.972409 kernel: eth0: renamed from tmp6f5e8
Jun 20 19:00:52.971783 systemd-networkd[1429]: lxce510ce04448a: Link UP
Jun 20 19:00:52.979665 systemd-networkd[1429]: lxce510ce04448a: Gained carrier
Jun 20 19:00:53.073664 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL
Jun 20 19:00:53.257986 kubelet[2647]: I0620 19:00:53.256667 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-slplf" podStartSLOduration=6.651898539 podStartE2EDuration="30.256646133s" podCreationTimestamp="2025-06-20 19:00:23 +0000 UTC" firstStartedPulling="2025-06-20 19:00:23.616327564 +0000 UTC m=+7.892815854" lastFinishedPulling="2025-06-20 19:00:47.221075139 +0000 UTC m=+31.497563448" observedRunningTime="2025-06-20 19:00:48.067172004 +0000 UTC m=+32.343660323" watchObservedRunningTime="2025-06-20 19:00:53.256646133 +0000 UTC m=+37.533134423"
Jun 20 19:00:54.033865 systemd-networkd[1429]: lxce510ce04448a: Gained IPv6LL
Jun 20 19:00:54.481747 systemd-networkd[1429]: lxc_health: Gained IPv6LL
Jun 20 19:00:54.801759 systemd-networkd[1429]: lxc3018af32733b: Gained IPv6LL
Jun 20 19:00:56.589809 containerd[1547]: time="2025-06-20T19:00:56.589595188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:56.589809 containerd[1547]: time="2025-06-20T19:00:56.589645367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:56.589809 containerd[1547]: time="2025-06-20T19:00:56.589658684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:56.589809 containerd[1547]: time="2025-06-20T19:00:56.589724915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:56.620647 containerd[1547]: time="2025-06-20T19:00:56.619831829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:00:56.620647 containerd[1547]: time="2025-06-20T19:00:56.619880835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:00:56.620647 containerd[1547]: time="2025-06-20T19:00:56.619894653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:56.620647 containerd[1547]: time="2025-06-20T19:00:56.619960263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:00:56.641345 systemd[1]: Started cri-containerd-6f5e83f2bd0d5fada1d8f255d95ec0534a271b68f748b14f3b6e70cc162a02d4.scope - libcontainer container 6f5e83f2bd0d5fada1d8f255d95ec0534a271b68f748b14f3b6e70cc162a02d4.
Jun 20 19:00:56.660891 systemd[1]: Started cri-containerd-784d90ad846f49dcf9cadc613d744d6ae67cbfcbb36159906f21fc16a66feedf.scope - libcontainer container 784d90ad846f49dcf9cadc613d744d6ae67cbfcbb36159906f21fc16a66feedf.
Jun 20 19:00:56.735977 containerd[1547]: time="2025-06-20T19:00:56.735858119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm5th,Uid:9bf08c48-72aa-41ef-b008-a01dff680d2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"784d90ad846f49dcf9cadc613d744d6ae67cbfcbb36159906f21fc16a66feedf\""
Jun 20 19:00:56.742989 containerd[1547]: time="2025-06-20T19:00:56.742843491Z" level=info msg="CreateContainer within sandbox \"784d90ad846f49dcf9cadc613d744d6ae67cbfcbb36159906f21fc16a66feedf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:00:56.746395 containerd[1547]: time="2025-06-20T19:00:56.746318192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gg7gs,Uid:466a9534-fe12-476e-9f50-a4bae6981ca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f5e83f2bd0d5fada1d8f255d95ec0534a271b68f748b14f3b6e70cc162a02d4\""
Jun 20 19:00:56.749642 containerd[1547]: time="2025-06-20T19:00:56.749594982Z" level=info msg="CreateContainer within sandbox \"6f5e83f2bd0d5fada1d8f255d95ec0534a271b68f748b14f3b6e70cc162a02d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:00:56.775571 containerd[1547]: time="2025-06-20T19:00:56.775498680Z" level=info msg="CreateContainer within sandbox \"6f5e83f2bd0d5fada1d8f255d95ec0534a271b68f748b14f3b6e70cc162a02d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6943ef262d3f58a5a5f23f24ab1f47e8ac8fdc753b184d1c83876fee77d275fd\""
Jun 20 19:00:56.776282 containerd[1547]: time="2025-06-20T19:00:56.776255351Z" level=info msg="StartContainer for \"6943ef262d3f58a5a5f23f24ab1f47e8ac8fdc753b184d1c83876fee77d275fd\""
Jun 20 19:00:56.777734 containerd[1547]: time="2025-06-20T19:00:56.777303739Z" level=info msg="CreateContainer within sandbox \"784d90ad846f49dcf9cadc613d744d6ae67cbfcbb36159906f21fc16a66feedf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c7d3b3e7123506a9e113f652fceb9085bd91cf67b655c1877ec50810023fbf6\""
Jun 20 19:00:56.778821 containerd[1547]: time="2025-06-20T19:00:56.778806009Z" level=info msg="StartContainer for \"8c7d3b3e7123506a9e113f652fceb9085bd91cf67b655c1877ec50810023fbf6\""
Jun 20 19:00:56.803736 systemd[1]: Started cri-containerd-6943ef262d3f58a5a5f23f24ab1f47e8ac8fdc753b184d1c83876fee77d275fd.scope - libcontainer container 6943ef262d3f58a5a5f23f24ab1f47e8ac8fdc753b184d1c83876fee77d275fd.
Jun 20 19:00:56.807980 systemd[1]: Started cri-containerd-8c7d3b3e7123506a9e113f652fceb9085bd91cf67b655c1877ec50810023fbf6.scope - libcontainer container 8c7d3b3e7123506a9e113f652fceb9085bd91cf67b655c1877ec50810023fbf6.
Jun 20 19:00:56.838051 containerd[1547]: time="2025-06-20T19:00:56.837845154Z" level=info msg="StartContainer for \"8c7d3b3e7123506a9e113f652fceb9085bd91cf67b655c1877ec50810023fbf6\" returns successfully"
Jun 20 19:00:56.843728 containerd[1547]: time="2025-06-20T19:00:56.843506324Z" level=info msg="StartContainer for \"6943ef262d3f58a5a5f23f24ab1f47e8ac8fdc753b184d1c83876fee77d275fd\" returns successfully"
Jun 20 19:00:57.144108 kubelet[2647]: I0620 19:00:57.143545 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vm5th" podStartSLOduration=34.143483662 podStartE2EDuration="34.143483662s" podCreationTimestamp="2025-06-20 19:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:00:57.113820074 +0000 UTC m=+41.390308413" watchObservedRunningTime="2025-06-20 19:00:57.143483662 +0000 UTC m=+41.419971981"
Jun 20 19:00:57.602689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603213267.mount: Deactivated successfully.
Jun 20 19:00:58.109813 kubelet[2647]: I0620 19:00:58.108860 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gg7gs" podStartSLOduration=35.108831601 podStartE2EDuration="35.108831601s" podCreationTimestamp="2025-06-20 19:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:00:57.145425554 +0000 UTC m=+41.421913863" watchObservedRunningTime="2025-06-20 19:00:58.108831601 +0000 UTC m=+42.385319930"
Jun 20 19:02:12.236193 systemd[1]: Started sshd@7-157.180.74.176:22-115.247.46.121:41288.service - OpenSSH per-connection server daemon (115.247.46.121:41288).
Jun 20 19:02:12.637893 sshd[4046]: Connection closed by 115.247.46.121 port 41288 [preauth]
Jun 20 19:02:12.640357 systemd[1]: sshd@7-157.180.74.176:22-115.247.46.121:41288.service: Deactivated successfully.
Jun 20 19:02:19.812371 systemd[1]: Started sshd@8-157.180.74.176:22-129.159.15.98:34112.service - OpenSSH per-connection server daemon (129.159.15.98:34112).
Jun 20 19:02:20.348742 sshd[4053]: maximum authentication attempts exceeded for root from 129.159.15.98 port 34112 ssh2 [preauth]
Jun 20 19:02:20.348742 sshd[4053]: Disconnecting authenticating user root 129.159.15.98 port 34112: Too many authentication failures [preauth]
Jun 20 19:02:20.352189 systemd[1]: sshd@8-157.180.74.176:22-129.159.15.98:34112.service: Deactivated successfully.
Jun 20 19:02:20.428870 systemd[1]: Started sshd@9-157.180.74.176:22-129.159.15.98:34124.service - OpenSSH per-connection server daemon (129.159.15.98:34124).
Jun 20 19:02:21.040448 sshd[4058]: maximum authentication attempts exceeded for root from 129.159.15.98 port 34124 ssh2 [preauth]
Jun 20 19:02:21.040448 sshd[4058]: Disconnecting authenticating user root 129.159.15.98 port 34124: Too many authentication failures [preauth]
Jun 20 19:02:21.043048 systemd[1]: sshd@9-157.180.74.176:22-129.159.15.98:34124.service: Deactivated successfully.
Jun 20 19:02:21.115985 systemd[1]: Started sshd@10-157.180.74.176:22-129.159.15.98:34128.service - OpenSSH per-connection server daemon (129.159.15.98:34128).
Jun 20 19:02:21.704623 sshd[4063]: maximum authentication attempts exceeded for root from 129.159.15.98 port 34128 ssh2 [preauth]
Jun 20 19:02:21.704623 sshd[4063]: Disconnecting authenticating user root 129.159.15.98 port 34128: Too many authentication failures [preauth]
Jun 20 19:02:21.706826 systemd[1]: sshd@10-157.180.74.176:22-129.159.15.98:34128.service: Deactivated successfully.
Jun 20 19:02:21.780513 systemd[1]: Started sshd@11-157.180.74.176:22-129.159.15.98:34134.service - OpenSSH per-connection server daemon (129.159.15.98:34134).
Jun 20 19:02:22.218414 sshd[4068]: Received disconnect from 129.159.15.98 port 34134:11: disconnected by user [preauth]
Jun 20 19:02:22.218414 sshd[4068]: Disconnected from authenticating user root 129.159.15.98 port 34134 [preauth]
Jun 20 19:02:22.220734 systemd[1]: sshd@11-157.180.74.176:22-129.159.15.98:34134.service: Deactivated successfully.
Jun 20 19:02:22.268308 systemd[1]: Started sshd@12-157.180.74.176:22-129.159.15.98:34150.service - OpenSSH per-connection server daemon (129.159.15.98:34150).
Jun 20 19:02:22.603661 sshd[4073]: Invalid user admin from 129.159.15.98 port 34150
Jun 20 19:02:22.802641 sshd[4073]: maximum authentication attempts exceeded for invalid user admin from 129.159.15.98 port 34150 ssh2 [preauth]
Jun 20 19:02:22.802641 sshd[4073]: Disconnecting invalid user admin 129.159.15.98 port 34150: Too many authentication failures [preauth]
Jun 20 19:02:22.806776 systemd[1]: sshd@12-157.180.74.176:22-129.159.15.98:34150.service: Deactivated successfully.
Jun 20 19:02:22.874992 systemd[1]: Started sshd@13-157.180.74.176:22-129.159.15.98:48930.service - OpenSSH per-connection server daemon (129.159.15.98:48930).
Jun 20 19:02:23.124706 sshd[4078]: Invalid user admin from 129.159.15.98 port 48930
Jun 20 19:02:23.317286 sshd[4078]: maximum authentication attempts exceeded for invalid user admin from 129.159.15.98 port 48930 ssh2 [preauth]
Jun 20 19:02:23.317286 sshd[4078]: Disconnecting invalid user admin 129.159.15.98 port 48930: Too many authentication failures [preauth]
Jun 20 19:02:23.317039 systemd[1]: sshd@13-157.180.74.176:22-129.159.15.98:48930.service: Deactivated successfully.
Jun 20 19:02:23.389147 systemd[1]: Started sshd@14-157.180.74.176:22-129.159.15.98:48940.service - OpenSSH per-connection server daemon (129.159.15.98:48940).
Jun 20 19:02:23.676643 sshd[4083]: Invalid user admin from 129.159.15.98 port 48940
Jun 20 19:02:23.823350 sshd[4083]: Received disconnect from 129.159.15.98 port 48940:11: disconnected by user [preauth]
Jun 20 19:02:23.823350 sshd[4083]: Disconnected from invalid user admin 129.159.15.98 port 48940 [preauth]
Jun 20 19:02:23.826545 systemd[1]: sshd@14-157.180.74.176:22-129.159.15.98:48940.service: Deactivated successfully.
Jun 20 19:02:23.875951 systemd[1]: Started sshd@15-157.180.74.176:22-129.159.15.98:48956.service - OpenSSH per-connection server daemon (129.159.15.98:48956).
Jun 20 19:02:24.167714 sshd[4088]: Invalid user oracle from 129.159.15.98 port 48956
Jun 20 19:02:24.343714 sshd[4088]: maximum authentication attempts exceeded for invalid user oracle from 129.159.15.98 port 48956 ssh2 [preauth]
Jun 20 19:02:24.343714 sshd[4088]: Disconnecting invalid user oracle 129.159.15.98 port 48956: Too many authentication failures [preauth]
Jun 20 19:02:24.346186 systemd[1]: sshd@15-157.180.74.176:22-129.159.15.98:48956.service: Deactivated successfully.
Jun 20 19:02:24.418949 systemd[1]: Started sshd@16-157.180.74.176:22-129.159.15.98:48964.service - OpenSSH per-connection server daemon (129.159.15.98:48964).
Jun 20 19:02:24.713797 sshd[4095]: Invalid user oracle from 129.159.15.98 port 48964
Jun 20 19:02:24.906898 sshd[4095]: maximum authentication attempts exceeded for invalid user oracle from 129.159.15.98 port 48964 ssh2 [preauth]
Jun 20 19:02:24.906898 sshd[4095]: Disconnecting invalid user oracle 129.159.15.98 port 48964: Too many authentication failures [preauth]
Jun 20 19:02:24.910658 systemd[1]: sshd@16-157.180.74.176:22-129.159.15.98:48964.service: Deactivated successfully.
Jun 20 19:02:24.989216 systemd[1]: Started sshd@17-157.180.74.176:22-129.159.15.98:48970.service - OpenSSH per-connection server daemon (129.159.15.98:48970).
Jun 20 19:02:25.451017 sshd[4100]: Invalid user oracle from 129.159.15.98 port 48970
Jun 20 19:02:25.515254 sshd[4100]: Received disconnect from 129.159.15.98 port 48970:11: disconnected by user [preauth]
Jun 20 19:02:25.515254 sshd[4100]: Disconnected from invalid user oracle 129.159.15.98 port 48970 [preauth]
Jun 20 19:02:25.518233 systemd[1]: sshd@17-157.180.74.176:22-129.159.15.98:48970.service: Deactivated successfully.
Jun 20 19:02:25.568150 systemd[1]: Started sshd@18-157.180.74.176:22-129.159.15.98:48980.service - OpenSSH per-connection server daemon (129.159.15.98:48980).
Jun 20 19:02:26.028947 sshd[4105]: Invalid user usuario from 129.159.15.98 port 48980
Jun 20 19:02:26.210580 sshd[4105]: maximum authentication attempts exceeded for invalid user usuario from 129.159.15.98 port 48980 ssh2 [preauth]
Jun 20 19:02:26.210580 sshd[4105]: Disconnecting invalid user usuario 129.159.15.98 port 48980: Too many authentication failures [preauth]
Jun 20 19:02:26.213807 systemd[1]: sshd@18-157.180.74.176:22-129.159.15.98:48980.service: Deactivated successfully.
Jun 20 19:02:26.285959 systemd[1]: Started sshd@19-157.180.74.176:22-129.159.15.98:48996.service - OpenSSH per-connection server daemon (129.159.15.98:48996).
Jun 20 19:02:26.685365 sshd[4110]: Invalid user usuario from 129.159.15.98 port 48996
Jun 20 19:02:26.879679 sshd[4110]: maximum authentication attempts exceeded for invalid user usuario from 129.159.15.98 port 48996 ssh2 [preauth]
Jun 20 19:02:26.879679 sshd[4110]: Disconnecting invalid user usuario 129.159.15.98 port 48996: Too many authentication failures [preauth]
Jun 20 19:02:26.883083 systemd[1]: sshd@19-157.180.74.176:22-129.159.15.98:48996.service: Deactivated successfully.
Jun 20 19:02:26.960192 systemd[1]: Started sshd@20-157.180.74.176:22-129.159.15.98:49006.service - OpenSSH per-connection server daemon (129.159.15.98:49006).
Jun 20 19:02:27.346979 sshd[4115]: Invalid user usuario from 129.159.15.98 port 49006
Jun 20 19:02:27.418181 sshd[4115]: Received disconnect from 129.159.15.98 port 49006:11: disconnected by user [preauth]
Jun 20 19:02:27.418181 sshd[4115]: Disconnected from invalid user usuario 129.159.15.98 port 49006 [preauth]
Jun 20 19:02:27.421244 systemd[1]: sshd@20-157.180.74.176:22-129.159.15.98:49006.service: Deactivated successfully.
Jun 20 19:02:27.470216 systemd[1]: Started sshd@21-157.180.74.176:22-129.159.15.98:49016.service - OpenSSH per-connection server daemon (129.159.15.98:49016).
Jun 20 19:02:27.831584 sshd[4120]: Invalid user test from 129.159.15.98 port 49016
Jun 20 19:02:28.015982 sshd[4120]: maximum authentication attempts exceeded for invalid user test from 129.159.15.98 port 49016 ssh2 [preauth]
Jun 20 19:02:28.015982 sshd[4120]: Disconnecting invalid user test 129.159.15.98 port 49016: Too many authentication failures [preauth]
Jun 20 19:02:28.019662 systemd[1]: sshd@21-157.180.74.176:22-129.159.15.98:49016.service: Deactivated successfully.
Jun 20 19:02:28.089929 systemd[1]: Started sshd@22-157.180.74.176:22-129.159.15.98:49028.service - OpenSSH per-connection server daemon (129.159.15.98:49028).
Jun 20 19:02:28.388859 sshd[4125]: Invalid user test from 129.159.15.98 port 49028
Jun 20 19:02:28.581361 sshd[4125]: maximum authentication attempts exceeded for invalid user test from 129.159.15.98 port 49028 ssh2 [preauth]
Jun 20 19:02:28.581361 sshd[4125]: Disconnecting invalid user test 129.159.15.98 port 49028: Too many authentication failures [preauth]
Jun 20 19:02:28.584602 systemd[1]: sshd@22-157.180.74.176:22-129.159.15.98:49028.service: Deactivated successfully.
Jun 20 19:02:28.655938 systemd[1]: Started sshd@23-157.180.74.176:22-129.159.15.98:49038.service - OpenSSH per-connection server daemon (129.159.15.98:49038).
Jun 20 19:02:29.003961 sshd[4130]: Invalid user test from 129.159.15.98 port 49038
Jun 20 19:02:29.088303 sshd[4130]: Received disconnect from 129.159.15.98 port 49038:11: disconnected by user [preauth]
Jun 20 19:02:29.088303 sshd[4130]: Disconnected from invalid user test 129.159.15.98 port 49038 [preauth]
Jun 20 19:02:29.090870 systemd[1]: sshd@23-157.180.74.176:22-129.159.15.98:49038.service: Deactivated successfully.
Jun 20 19:02:29.138004 systemd[1]: Started sshd@24-157.180.74.176:22-129.159.15.98:49050.service - OpenSSH per-connection server daemon (129.159.15.98:49050).
Jun 20 19:02:29.507440 sshd[4135]: Invalid user user from 129.159.15.98 port 49050
Jun 20 19:02:29.681797 sshd[4135]: maximum authentication attempts exceeded for invalid user user from 129.159.15.98 port 49050 ssh2 [preauth]
Jun 20 19:02:29.681797 sshd[4135]: Disconnecting invalid user user 129.159.15.98 port 49050: Too many authentication failures [preauth]
Jun 20 19:02:29.683928 systemd[1]: sshd@24-157.180.74.176:22-129.159.15.98:49050.service: Deactivated successfully.
Jun 20 19:02:29.751796 systemd[1]: Started sshd@25-157.180.74.176:22-129.159.15.98:49066.service - OpenSSH per-connection server daemon (129.159.15.98:49066).
Jun 20 19:02:30.113179 sshd[4140]: Invalid user user from 129.159.15.98 port 49066
Jun 20 19:02:30.306770 sshd[4140]: maximum authentication attempts exceeded for invalid user user from 129.159.15.98 port 49066 ssh2 [preauth]
Jun 20 19:02:30.306770 sshd[4140]: Disconnecting invalid user user 129.159.15.98 port 49066: Too many authentication failures [preauth]
Jun 20 19:02:30.309543 systemd[1]: sshd@25-157.180.74.176:22-129.159.15.98:49066.service: Deactivated successfully.
Jun 20 19:02:30.381961 systemd[1]: Started sshd@26-157.180.74.176:22-129.159.15.98:49070.service - OpenSSH per-connection server daemon (129.159.15.98:49070).
Jun 20 19:02:30.774695 sshd[4145]: Invalid user user from 129.159.15.98 port 49070
Jun 20 19:02:30.913668 sshd[4145]: Received disconnect from 129.159.15.98 port 49070:11: disconnected by user [preauth]
Jun 20 19:02:30.913668 sshd[4145]: Disconnected from invalid user user 129.159.15.98 port 49070 [preauth]
Jun 20 19:02:30.916604 systemd[1]: sshd@26-157.180.74.176:22-129.159.15.98:49070.service: Deactivated successfully.
Jun 20 19:02:30.959991 systemd[1]: Started sshd@27-157.180.74.176:22-129.159.15.98:49078.service - OpenSSH per-connection server daemon (129.159.15.98:49078).
Jun 20 19:02:31.605841 sshd[4150]: Invalid user ftpuser from 129.159.15.98 port 49078
Jun 20 19:02:31.785408 sshd[4150]: maximum authentication attempts exceeded for invalid user ftpuser from 129.159.15.98 port 49078 ssh2 [preauth]
Jun 20 19:02:31.785408 sshd[4150]: Disconnecting invalid user ftpuser 129.159.15.98 port 49078: Too many authentication failures [preauth]
Jun 20 19:02:31.788752 systemd[1]: sshd@27-157.180.74.176:22-129.159.15.98:49078.service: Deactivated successfully.
Jun 20 19:02:31.873998 systemd[1]: Started sshd@28-157.180.74.176:22-129.159.15.98:49080.service - OpenSSH per-connection server daemon (129.159.15.98:49080).
Jun 20 19:02:32.365993 sshd[4155]: Invalid user ftpuser from 129.159.15.98 port 49080
Jun 20 19:02:32.557038 sshd[4155]: maximum authentication attempts exceeded for invalid user ftpuser from 129.159.15.98 port 49080 ssh2 [preauth]
Jun 20 19:02:32.557038 sshd[4155]: Disconnecting invalid user ftpuser 129.159.15.98 port 49080: Too many authentication failures [preauth]
Jun 20 19:02:32.560332 systemd[1]: sshd@28-157.180.74.176:22-129.159.15.98:49080.service: Deactivated successfully.
Jun 20 19:02:32.633063 systemd[1]: Started sshd@29-157.180.74.176:22-129.159.15.98:57970.service - OpenSSH per-connection server daemon (129.159.15.98:57970).
Jun 20 19:02:33.191939 sshd[4160]: Invalid user ftpuser from 129.159.15.98 port 57970
Jun 20 19:02:33.329272 sshd[4160]: Received disconnect from 129.159.15.98 port 57970:11: disconnected by user [preauth]
Jun 20 19:02:33.329272 sshd[4160]: Disconnected from invalid user ftpuser 129.159.15.98 port 57970 [preauth]
Jun 20 19:02:33.331693 systemd[1]: sshd@29-157.180.74.176:22-129.159.15.98:57970.service: Deactivated successfully.
Jun 20 19:02:33.375702 systemd[1]: Started sshd@30-157.180.74.176:22-129.159.15.98:57984.service - OpenSSH per-connection server daemon (129.159.15.98:57984).
Jun 20 19:02:33.821102 sshd[4165]: Invalid user test1 from 129.159.15.98 port 57984
Jun 20 19:02:34.017437 sshd[4165]: maximum authentication attempts exceeded for invalid user test1 from 129.159.15.98 port 57984 ssh2 [preauth]
Jun 20 19:02:34.017437 sshd[4165]: Disconnecting invalid user test1 129.159.15.98 port 57984: Too many authentication failures [preauth]
Jun 20 19:02:34.020339 systemd[1]: sshd@30-157.180.74.176:22-129.159.15.98:57984.service: Deactivated successfully.
Jun 20 19:02:34.097940 systemd[1]: Started sshd@31-157.180.74.176:22-129.159.15.98:57988.service - OpenSSH per-connection server daemon (129.159.15.98:57988).
Jun 20 19:02:34.502755 sshd[4170]: Invalid user test1 from 129.159.15.98 port 57988
Jun 20 19:02:34.698886 sshd[4170]: maximum authentication attempts exceeded for invalid user test1 from 129.159.15.98 port 57988 ssh2 [preauth]
Jun 20 19:02:34.698886 sshd[4170]: Disconnecting invalid user test1 129.159.15.98 port 57988: Too many authentication failures [preauth]
Jun 20 19:02:34.701061 systemd[1]: sshd@31-157.180.74.176:22-129.159.15.98:57988.service: Deactivated successfully.
Jun 20 19:02:34.787132 systemd[1]: Started sshd@32-157.180.74.176:22-129.159.15.98:57994.service - OpenSSH per-connection server daemon (129.159.15.98:57994).
Jun 20 19:02:35.159208 sshd[4175]: Invalid user test1 from 129.159.15.98 port 57994
Jun 20 19:02:35.234097 sshd[4175]: Received disconnect from 129.159.15.98 port 57994:11: disconnected by user [preauth]
Jun 20 19:02:35.234097 sshd[4175]: Disconnected from invalid user test1 129.159.15.98 port 57994 [preauth]
Jun 20 19:02:35.236777 systemd[1]: sshd@32-157.180.74.176:22-129.159.15.98:57994.service: Deactivated successfully.
Jun 20 19:02:35.285092 systemd[1]: Started sshd@33-157.180.74.176:22-129.159.15.98:58002.service - OpenSSH per-connection server daemon (129.159.15.98:58002).
Jun 20 19:02:35.581984 sshd[4180]: Invalid user test2 from 129.159.15.98 port 58002
Jun 20 19:02:35.756749 sshd[4180]: maximum authentication attempts exceeded for invalid user test2 from 129.159.15.98 port 58002 ssh2 [preauth]
Jun 20 19:02:35.756749 sshd[4180]: Disconnecting invalid user test2 129.159.15.98 port 58002: Too many authentication failures [preauth]
Jun 20 19:02:35.760029 systemd[1]: sshd@33-157.180.74.176:22-129.159.15.98:58002.service: Deactivated successfully.
Jun 20 19:02:35.833007 systemd[1]: Started sshd@34-157.180.74.176:22-129.159.15.98:58008.service - OpenSSH per-connection server daemon (129.159.15.98:58008).
Jun 20 19:02:36.219293 sshd[4185]: Invalid user test2 from 129.159.15.98 port 58008
Jun 20 19:02:36.406110 sshd[4185]: maximum authentication attempts exceeded for invalid user test2 from 129.159.15.98 port 58008 ssh2 [preauth]
Jun 20 19:02:36.406110 sshd[4185]: Disconnecting invalid user test2 129.159.15.98 port 58008: Too many authentication failures [preauth]
Jun 20 19:02:36.409834 systemd[1]: sshd@34-157.180.74.176:22-129.159.15.98:58008.service: Deactivated successfully.
Jun 20 19:02:36.484935 systemd[1]: Started sshd@35-157.180.74.176:22-129.159.15.98:58018.service - OpenSSH per-connection server daemon (129.159.15.98:58018).
Jun 20 19:02:36.870214 sshd[4190]: Invalid user test2 from 129.159.15.98 port 58018
Jun 20 19:02:36.939520 sshd[4190]: Received disconnect from 129.159.15.98 port 58018:11: disconnected by user [preauth]
Jun 20 19:02:36.939520 sshd[4190]: Disconnected from invalid user test2 129.159.15.98 port 58018 [preauth]
Jun 20 19:02:36.943443 systemd[1]: sshd@35-157.180.74.176:22-129.159.15.98:58018.service: Deactivated successfully.
Jun 20 19:02:36.989071 systemd[1]: Started sshd@36-157.180.74.176:22-129.159.15.98:58020.service - OpenSSH per-connection server daemon (129.159.15.98:58020).
Jun 20 19:02:37.389069 sshd[4195]: Invalid user ubuntu from 129.159.15.98 port 58020
Jun 20 19:02:37.586935 sshd[4195]: maximum authentication attempts exceeded for invalid user ubuntu from 129.159.15.98 port 58020 ssh2 [preauth]
Jun 20 19:02:37.586935 sshd[4195]: Disconnecting invalid user ubuntu 129.159.15.98 port 58020: Too many authentication failures [preauth]
Jun 20 19:02:37.590224 systemd[1]: sshd@36-157.180.74.176:22-129.159.15.98:58020.service: Deactivated successfully.
Jun 20 19:02:37.663231 systemd[1]: Started sshd@37-157.180.74.176:22-129.159.15.98:58026.service - OpenSSH per-connection server daemon (129.159.15.98:58026).
Jun 20 19:02:38.072462 sshd[4200]: Invalid user ubuntu from 129.159.15.98 port 58026
Jun 20 19:02:38.265200 sshd[4200]: maximum authentication attempts exceeded for invalid user ubuntu from 129.159.15.98 port 58026 ssh2 [preauth]
Jun 20 19:02:38.265200 sshd[4200]: Disconnecting invalid user ubuntu 129.159.15.98 port 58026: Too many authentication failures [preauth]
Jun 20 19:02:38.268363 systemd[1]: sshd@37-157.180.74.176:22-129.159.15.98:58026.service: Deactivated successfully.
Jun 20 19:02:38.341952 systemd[1]: Started sshd@38-157.180.74.176:22-129.159.15.98:58034.service - OpenSSH per-connection server daemon (129.159.15.98:58034).
Jun 20 19:02:38.716651 sshd[4205]: Invalid user ubuntu from 129.159.15.98 port 58034
Jun 20 19:02:38.854352 sshd[4205]: Received disconnect from 129.159.15.98 port 58034:11: disconnected by user [preauth]
Jun 20 19:02:38.854352 sshd[4205]: Disconnected from invalid user ubuntu 129.159.15.98 port 58034 [preauth]
Jun 20 19:02:38.857273 systemd[1]: sshd@38-157.180.74.176:22-129.159.15.98:58034.service: Deactivated successfully.
Jun 20 19:02:38.900951 systemd[1]: Started sshd@39-157.180.74.176:22-129.159.15.98:58046.service - OpenSSH per-connection server daemon (129.159.15.98:58046).
Jun 20 19:02:39.236146 sshd[4210]: Invalid user pi from 129.159.15.98 port 58046
Jun 20 19:02:39.377847 sshd[4210]: Received disconnect from 129.159.15.98 port 58046:11: disconnected by user [preauth]
Jun 20 19:02:39.377847 sshd[4210]: Disconnected from invalid user pi 129.159.15.98 port 58046 [preauth]
Jun 20 19:02:39.380029 systemd[1]: sshd@39-157.180.74.176:22-129.159.15.98:58046.service: Deactivated successfully.
Jun 20 19:02:39.423934 systemd[1]: Started sshd@40-157.180.74.176:22-129.159.15.98:58060.service - OpenSSH per-connection server daemon (129.159.15.98:58060).
Jun 20 19:02:39.753559 sshd[4215]: Invalid user baikal from 129.159.15.98 port 58060
Jun 20 19:02:39.786623 sshd[4215]: Received disconnect from 129.159.15.98 port 58060:11: disconnected by user [preauth]
Jun 20 19:02:39.786623 sshd[4215]: Disconnected from invalid user baikal 129.159.15.98 port 58060 [preauth]
Jun 20 19:02:39.790146 systemd[1]: sshd@40-157.180.74.176:22-129.159.15.98:58060.service: Deactivated successfully.
Jun 20 19:04:57.207129 systemd[1]: Started sshd@41-157.180.74.176:22-139.178.68.195:37264.service - OpenSSH per-connection server daemon (139.178.68.195:37264).
Jun 20 19:04:58.218239 sshd[4240]: Accepted publickey for core from 139.178.68.195 port 37264 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:04:58.221138 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:04:58.228600 systemd-logind[1529]: New session 8 of user core.
Jun 20 19:04:58.233826 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:04:59.556116 sshd[4242]: Connection closed by 139.178.68.195 port 37264
Jun 20 19:04:59.557296 sshd-session[4240]: pam_unix(sshd:session): session closed for user core
Jun 20 19:04:59.561321 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:04:59.561892 systemd[1]: sshd@41-157.180.74.176:22-139.178.68.195:37264.service: Deactivated successfully.
Jun 20 19:04:59.565718 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:04:59.567429 systemd-logind[1529]: Removed session 8.
Jun 20 19:05:04.737076 systemd[1]: Started sshd@42-157.180.74.176:22-139.178.68.195:33820.service - OpenSSH per-connection server daemon (139.178.68.195:33820).
Jun 20 19:05:05.719964 sshd[4255]: Accepted publickey for core from 139.178.68.195 port 33820 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:05.722111 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:05.729371 systemd-logind[1529]: New session 9 of user core.
Jun 20 19:05:05.736768 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:05:06.530743 sshd[4257]: Connection closed by 139.178.68.195 port 33820
Jun 20 19:05:06.531688 sshd-session[4255]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:06.536747 systemd[1]: sshd@42-157.180.74.176:22-139.178.68.195:33820.service: Deactivated successfully.
Jun 20 19:05:06.539990 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 19:05:06.541899 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit.
Jun 20 19:05:06.544280 systemd-logind[1529]: Removed session 9.
Jun 20 19:05:11.708198 systemd[1]: Started sshd@43-157.180.74.176:22-139.178.68.195:33828.service - OpenSSH per-connection server daemon (139.178.68.195:33828).
Jun 20 19:05:12.703207 sshd[4270]: Accepted publickey for core from 139.178.68.195 port 33828 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:12.705489 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:12.714621 systemd-logind[1529]: New session 10 of user core.
Jun 20 19:05:12.723784 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 19:05:13.514378 sshd[4272]: Connection closed by 139.178.68.195 port 33828
Jun 20 19:05:13.515103 sshd-session[4270]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:13.518905 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit.
Jun 20 19:05:13.519355 systemd[1]: sshd@43-157.180.74.176:22-139.178.68.195:33828.service: Deactivated successfully.
Jun 20 19:05:13.521393 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 19:05:13.522651 systemd-logind[1529]: Removed session 10.
Jun 20 19:05:13.693088 systemd[1]: Started sshd@44-157.180.74.176:22-139.178.68.195:33834.service - OpenSSH per-connection server daemon (139.178.68.195:33834).
Jun 20 19:05:14.693219 sshd[4285]: Accepted publickey for core from 139.178.68.195 port 33834 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:14.695667 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:14.705360 systemd-logind[1529]: New session 11 of user core.
Jun 20 19:05:14.714797 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:05:15.560021 sshd[4287]: Connection closed by 139.178.68.195 port 33834
Jun 20 19:05:15.561736 sshd-session[4285]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:15.565701 systemd[1]: sshd@44-157.180.74.176:22-139.178.68.195:33834.service: Deactivated successfully.
Jun 20 19:05:15.568086 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:05:15.569199 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:05:15.571155 systemd-logind[1529]: Removed session 11.
Jun 20 19:05:15.741402 systemd[1]: Started sshd@45-157.180.74.176:22-139.178.68.195:54856.service - OpenSSH per-connection server daemon (139.178.68.195:54856).
Jun 20 19:05:16.744595 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 54856 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:16.746417 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:16.752777 systemd-logind[1529]: New session 12 of user core.
Jun 20 19:05:16.761800 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:05:17.563974 sshd[4301]: Connection closed by 139.178.68.195 port 54856
Jun 20 19:05:17.564993 sshd-session[4297]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:17.570143 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:05:17.571176 systemd[1]: sshd@45-157.180.74.176:22-139.178.68.195:54856.service: Deactivated successfully.
Jun 20 19:05:17.575166 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:05:17.577030 systemd-logind[1529]: Removed session 12.
Jun 20 19:05:22.745186 systemd[1]: Started sshd@46-157.180.74.176:22-139.178.68.195:54868.service - OpenSSH per-connection server daemon (139.178.68.195:54868).
Jun 20 19:05:23.738072 sshd[4313]: Accepted publickey for core from 139.178.68.195 port 54868 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:23.740485 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:23.750131 systemd-logind[1529]: New session 13 of user core.
Jun 20 19:05:23.755767 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:05:24.543561 sshd[4315]: Connection closed by 139.178.68.195 port 54868
Jun 20 19:05:24.544622 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:24.550399 systemd[1]: sshd@46-157.180.74.176:22-139.178.68.195:54868.service: Deactivated successfully.
Jun 20 19:05:24.554713 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:05:24.556387 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:05:24.558758 systemd-logind[1529]: Removed session 13.
Jun 20 19:05:24.723434 systemd[1]: Started sshd@47-157.180.74.176:22-139.178.68.195:36628.service - OpenSSH per-connection server daemon (139.178.68.195:36628).
Jun 20 19:05:25.706881 sshd[4329]: Accepted publickey for core from 139.178.68.195 port 36628 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:25.708698 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:25.717278 systemd-logind[1529]: New session 14 of user core.
Jun 20 19:05:25.724777 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:05:26.734287 sshd[4331]: Connection closed by 139.178.68.195 port 36628
Jun 20 19:05:26.736460 sshd-session[4329]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:26.743702 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:05:26.744459 systemd[1]: sshd@47-157.180.74.176:22-139.178.68.195:36628.service: Deactivated successfully.
Jun 20 19:05:26.748662 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:05:26.751696 systemd-logind[1529]: Removed session 14.
Jun 20 19:05:26.914086 systemd[1]: Started sshd@48-157.180.74.176:22-139.178.68.195:36630.service - OpenSSH per-connection server daemon (139.178.68.195:36630).
Jun 20 19:05:27.910345 sshd[4341]: Accepted publickey for core from 139.178.68.195 port 36630 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:27.912654 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:27.922636 systemd-logind[1529]: New session 15 of user core.
Jun 20 19:05:27.927758 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:05:29.780282 sshd[4343]: Connection closed by 139.178.68.195 port 36630
Jun 20 19:05:29.781202 sshd-session[4341]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:29.785918 systemd[1]: sshd@48-157.180.74.176:22-139.178.68.195:36630.service: Deactivated successfully.
Jun 20 19:05:29.789031 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:05:29.791562 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:05:29.793105 systemd-logind[1529]: Removed session 15.
Jun 20 19:05:29.958014 systemd[1]: Started sshd@49-157.180.74.176:22-139.178.68.195:36642.service - OpenSSH per-connection server daemon (139.178.68.195:36642).
Jun 20 19:05:30.968834 sshd[4360]: Accepted publickey for core from 139.178.68.195 port 36642 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:30.971308 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:30.980374 systemd-logind[1529]: New session 16 of user core.
Jun 20 19:05:30.985817 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:05:31.985013 sshd[4362]: Connection closed by 139.178.68.195 port 36642
Jun 20 19:05:31.985891 sshd-session[4360]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:31.991457 systemd[1]: sshd@49-157.180.74.176:22-139.178.68.195:36642.service: Deactivated successfully.
Jun 20 19:05:31.995382 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:05:31.996944 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:05:31.998646 systemd-logind[1529]: Removed session 16.
Jun 20 19:05:32.164979 systemd[1]: Started sshd@50-157.180.74.176:22-139.178.68.195:36656.service - OpenSSH per-connection server daemon (139.178.68.195:36656).
Jun 20 19:05:33.160722 sshd[4372]: Accepted publickey for core from 139.178.68.195 port 36656 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:33.163345 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:33.173587 systemd-logind[1529]: New session 17 of user core.
Jun 20 19:05:33.178905 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:05:33.960818 sshd[4374]: Connection closed by 139.178.68.195 port 36656
Jun 20 19:05:33.961733 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:33.966045 systemd[1]: sshd@50-157.180.74.176:22-139.178.68.195:36656.service: Deactivated successfully.
Jun 20 19:05:33.969518 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:05:33.972510 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:05:33.974590 systemd-logind[1529]: Removed session 17.
Jun 20 19:05:39.140549 systemd[1]: Started sshd@51-157.180.74.176:22-139.178.68.195:40562.service - OpenSSH per-connection server daemon (139.178.68.195:40562).
Jun 20 19:05:40.141357 sshd[4387]: Accepted publickey for core from 139.178.68.195 port 40562 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:40.144043 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:40.154242 systemd-logind[1529]: New session 18 of user core.
Jun 20 19:05:40.158221 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:05:40.959496 sshd[4389]: Connection closed by 139.178.68.195 port 40562
Jun 20 19:05:40.960869 sshd-session[4387]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:40.966523 systemd[1]: sshd@51-157.180.74.176:22-139.178.68.195:40562.service: Deactivated successfully.
Jun 20 19:05:40.970344 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:05:40.972516 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:05:40.974437 systemd-logind[1529]: Removed session 18.
Jun 20 19:05:46.139994 systemd[1]: Started sshd@52-157.180.74.176:22-139.178.68.195:50558.service - OpenSSH per-connection server daemon (139.178.68.195:50558).
Jun 20 19:05:47.131644 sshd[4402]: Accepted publickey for core from 139.178.68.195 port 50558 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:47.133840 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:47.141463 systemd-logind[1529]: New session 19 of user core.
Jun 20 19:05:47.145880 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:05:47.894288 sshd[4404]: Connection closed by 139.178.68.195 port 50558
Jun 20 19:05:47.895788 sshd-session[4402]: pam_unix(sshd:session): session closed for user core
Jun 20 19:05:47.899024 systemd[1]: sshd@52-157.180.74.176:22-139.178.68.195:50558.service: Deactivated successfully.
Jun 20 19:05:47.901505 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:05:47.903429 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:05:47.904884 systemd-logind[1529]: Removed session 19.
Jun 20 19:05:48.073892 systemd[1]: Started sshd@53-157.180.74.176:22-139.178.68.195:50562.service - OpenSSH per-connection server daemon (139.178.68.195:50562).
Jun 20 19:05:49.065309 sshd[4416]: Accepted publickey for core from 139.178.68.195 port 50562 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:49.067509 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:49.074941 systemd-logind[1529]: New session 20 of user core.
Jun 20 19:05:49.089866 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:05:50.989236 systemd[1]: run-containerd-runc-k8s.io-b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69-runc.sxDSVa.mount: Deactivated successfully.
Jun 20 19:05:51.001955 containerd[1547]: time="2025-06-20T19:05:51.001830113Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:05:51.100112 containerd[1547]: time="2025-06-20T19:05:51.099982086Z" level=info msg="StopContainer for \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\" with timeout 30 (s)"
Jun 20 19:05:51.100429 containerd[1547]: time="2025-06-20T19:05:51.100382520Z" level=info msg="StopContainer for \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\" with timeout 2 (s)"
Jun 20 19:05:51.103586 kubelet[2647]: E0620 19:05:51.074843 2647 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:05:51.111687 containerd[1547]: time="2025-06-20T19:05:51.111617837Z" level=info msg="Stop container \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\" with signal terminated"
Jun 20 19:05:51.112200 containerd[1547]: time="2025-06-20T19:05:51.111969027Z" level=info msg="Stop container \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\" with signal terminated"
Jun 20 19:05:51.125491 systemd-networkd[1429]: lxc_health: Link DOWN
Jun 20 19:05:51.125497 systemd-networkd[1429]: lxc_health: Lost carrier
Jun 20 19:05:51.129385 systemd[1]: cri-containerd-f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2.scope: Deactivated successfully.
Jun 20 19:05:51.147884 systemd[1]: cri-containerd-b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69.scope: Deactivated successfully.
Jun 20 19:05:51.148499 systemd[1]: cri-containerd-b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69.scope: Consumed 8.263s CPU time, 192.9M memory peak, 68.7M read from disk, 13.3M written to disk.
Jun 20 19:05:51.168818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69-rootfs.mount: Deactivated successfully.
Jun 20 19:05:51.178079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2-rootfs.mount: Deactivated successfully.
Jun 20 19:05:51.183085 containerd[1547]: time="2025-06-20T19:05:51.182985167Z" level=info msg="shim disconnected" id=b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69 namespace=k8s.io
Jun 20 19:05:51.183293 containerd[1547]: time="2025-06-20T19:05:51.183143896Z" level=warning msg="cleaning up after shim disconnected" id=b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69 namespace=k8s.io
Jun 20 19:05:51.183417 containerd[1547]: time="2025-06-20T19:05:51.183153134Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:05:51.183537 containerd[1547]: time="2025-06-20T19:05:51.183175917Z" level=info msg="shim disconnected" id=f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2 namespace=k8s.io
Jun 20 19:05:51.183537 containerd[1547]: time="2025-06-20T19:05:51.183496851Z" level=warning msg="cleaning up after shim disconnected" id=f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2 namespace=k8s.io
Jun 20 19:05:51.183537 containerd[1547]: time="2025-06-20T19:05:51.183507631Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:05:51.211100 containerd[1547]: time="2025-06-20T19:05:51.210382312Z" level=info msg="StopContainer for \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\" returns successfully"
Jun 20 19:05:51.214115 containerd[1547]: time="2025-06-20T19:05:51.214034306Z" level=info msg="StopContainer for \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\" returns successfully"
Jun 20 19:05:51.216228 containerd[1547]: time="2025-06-20T19:05:51.215742360Z" level=info msg="StopPodSandbox for \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\""
Jun 20 19:05:51.220757 containerd[1547]: time="2025-06-20T19:05:51.220740819Z" level=info msg="StopPodSandbox for \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\""
Jun 20 19:05:51.220967 containerd[1547]: time="2025-06-20T19:05:51.220829986Z" level=info msg="Container to stop \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:05:51.220967 containerd[1547]: time="2025-06-20T19:05:51.220864000Z" level=info msg="Container to stop \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:05:51.220967 containerd[1547]: time="2025-06-20T19:05:51.220873558Z" level=info msg="Container to stop \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:05:51.220967 containerd[1547]: time="2025-06-20T19:05:51.220882425Z" level=info msg="Container to stop \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:05:51.220967 containerd[1547]: time="2025-06-20T19:05:51.220889568Z" level=info msg="Container to stop \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:05:51.224912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2-shm.mount: Deactivated successfully.
Jun 20 19:05:51.225835 containerd[1547]: time="2025-06-20T19:05:51.218219502Z" level=info msg="Container to stop \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:05:51.233859 systemd[1]: cri-containerd-41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca.scope: Deactivated successfully.
Jun 20 19:05:51.242199 systemd[1]: cri-containerd-b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2.scope: Deactivated successfully.
Jun 20 19:05:51.269064 containerd[1547]: time="2025-06-20T19:05:51.268970312Z" level=info msg="shim disconnected" id=b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2 namespace=k8s.io
Jun 20 19:05:51.269064 containerd[1547]: time="2025-06-20T19:05:51.269038720Z" level=warning msg="cleaning up after shim disconnected" id=b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2 namespace=k8s.io
Jun 20 19:05:51.269064 containerd[1547]: time="2025-06-20T19:05:51.269046033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:05:51.269852 containerd[1547]: time="2025-06-20T19:05:51.269569649Z" level=info msg="shim disconnected" id=41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca namespace=k8s.io
Jun 20 19:05:51.269852 containerd[1547]: time="2025-06-20T19:05:51.269700996Z" level=warning msg="cleaning up after shim disconnected" id=41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca namespace=k8s.io
Jun 20 19:05:51.269852 containerd[1547]: time="2025-06-20T19:05:51.269709372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:05:51.284571 containerd[1547]: time="2025-06-20T19:05:51.284494099Z" level=info msg="TearDown network for sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" successfully"
Jun 20 19:05:51.284684 containerd[1547]: time="2025-06-20T19:05:51.284661364Z" level=info msg="StopPodSandbox for \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" returns successfully"
Jun 20 19:05:51.289268 containerd[1547]: time="2025-06-20T19:05:51.289223951Z" level=info msg="TearDown network for sandbox \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" successfully"
Jun 20 19:05:51.289268 containerd[1547]: time="2025-06-20T19:05:51.289242335Z" level=info msg="StopPodSandbox for \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" returns successfully"
Jun 20 19:05:51.464017 kubelet[2647]: I0620 19:05:51.463913 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-etc-cni-netd\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464017 kubelet[2647]: I0620 19:05:51.463986 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hubble-tls\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464358 kubelet[2647]: I0620 19:05:51.464038 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-xtables-lock\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464358 kubelet[2647]: I0620 19:05:51.464075 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-bpf-maps\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464358 kubelet[2647]: I0620 19:05:51.464100 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-kernel\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464358 kubelet[2647]: I0620 19:05:51.464132 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-clustermesh-secrets\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464358 kubelet[2647]: I0620 19:05:51.464160 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cni-path\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464358 kubelet[2647]: I0620 19:05:51.464191 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-net\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464738 kubelet[2647]: I0620 19:05:51.464223 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxd8\" (UniqueName: \"kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-kube-api-access-rxxd8\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464738 kubelet[2647]: I0620 19:05:51.464267 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpg95\" (UniqueName: \"kubernetes.io/projected/b6fc2633-e411-4526-aab5-5c4e55b6809f-kube-api-access-hpg95\") pod \"b6fc2633-e411-4526-aab5-5c4e55b6809f\" (UID: \"b6fc2633-e411-4526-aab5-5c4e55b6809f\") "
Jun 20 19:05:51.464738 kubelet[2647]: I0620 19:05:51.464300 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6fc2633-e411-4526-aab5-5c4e55b6809f-cilium-config-path\") pod \"b6fc2633-e411-4526-aab5-5c4e55b6809f\" (UID: \"b6fc2633-e411-4526-aab5-5c4e55b6809f\") "
Jun 20 19:05:51.464738 kubelet[2647]: I0620 19:05:51.464329 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-config-path\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464738 kubelet[2647]: I0620 19:05:51.464358 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-run\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.464738 kubelet[2647]: I0620 19:05:51.464384 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-cgroup\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.465185 kubelet[2647]: I0620 19:05:51.464411 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hostproc\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.465185 kubelet[2647]: I0620 19:05:51.464435 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-lib-modules\") pod \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\" (UID: \"b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32\") "
Jun 20 19:05:51.478148 kubelet[2647]: I0620 19:05:51.474421 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.479562 kubelet[2647]: I0620 19:05:51.474409 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.479562 kubelet[2647]: I0620 19:05:51.478936 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.484017 kubelet[2647]: I0620 19:05:51.483949 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.484203 kubelet[2647]: I0620 19:05:51.484181 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.485928 kubelet[2647]: I0620 19:05:51.485673 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.491954 kubelet[2647]: I0620 19:05:51.490448 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 20 19:05:51.491954 kubelet[2647]: I0620 19:05:51.490561 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.497583 kubelet[2647]: I0620 19:05:51.496714 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:05:51.506586 kubelet[2647]: I0620 19:05:51.505168 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.506943 kubelet[2647]: I0620 19:05:51.506913 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.507097 kubelet[2647]: I0620 19:05:51.507077 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:05:51.507242 kubelet[2647]: I0620 19:05:51.507194 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-kube-api-access-rxxd8" (OuterVolumeSpecName: "kube-api-access-rxxd8") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "kube-api-access-rxxd8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:05:51.507391 kubelet[2647]: I0620 19:05:51.507349 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" (UID: "b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:05:51.507467 kubelet[2647]: I0620 19:05:51.507397 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fc2633-e411-4526-aab5-5c4e55b6809f-kube-api-access-hpg95" (OuterVolumeSpecName: "kube-api-access-hpg95") pod "b6fc2633-e411-4526-aab5-5c4e55b6809f" (UID: "b6fc2633-e411-4526-aab5-5c4e55b6809f"). InnerVolumeSpecName "kube-api-access-hpg95". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:05:51.507558 kubelet[2647]: I0620 19:05:51.507305 2647 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fc2633-e411-4526-aab5-5c4e55b6809f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6fc2633-e411-4526-aab5-5c4e55b6809f" (UID: "b6fc2633-e411-4526-aab5-5c4e55b6809f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:05:51.564888 kubelet[2647]: I0620 19:05:51.564842 2647 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-net\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566570 kubelet[2647]: I0620 19:05:51.566556 2647 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rxxd8\" (UniqueName: \"kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-kube-api-access-rxxd8\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566636 kubelet[2647]: I0620 19:05:51.566628 2647 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpg95\" (UniqueName: \"kubernetes.io/projected/b6fc2633-e411-4526-aab5-5c4e55b6809f-kube-api-access-hpg95\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566691 kubelet[2647]: I0620 19:05:51.566682 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-config-path\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566730 kubelet[2647]: I0620 19:05:51.566723 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6fc2633-e411-4526-aab5-5c4e55b6809f-cilium-config-path\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566773 kubelet[2647]: I0620 19:05:51.566765 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-run\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566814 kubelet[2647]: I0620 19:05:51.566807 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cilium-cgroup\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566855 kubelet[2647]: I0620 19:05:51.566849 2647 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hostproc\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566901 kubelet[2647]: I0620 19:05:51.566894 2647 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-lib-modules\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566943 kubelet[2647]: I0620 19:05:51.566937 2647 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-etc-cni-netd\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.566980 kubelet[2647]: I0620 19:05:51.566974 2647 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-hubble-tls\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.567073 kubelet[2647]: I0620 19:05:51.567026 2647 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-xtables-lock\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.567073 kubelet[2647]: I0620 19:05:51.567035 2647 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-bpf-maps\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.567073 kubelet[2647]: I0620 19:05:51.567042 2647 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-host-proc-sys-kernel\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.567073 kubelet[2647]: I0620 19:05:51.567049 2647 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-clustermesh-secrets\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.567073 kubelet[2647]: I0620 19:05:51.567057 2647 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32-cni-path\") on node \"ci-4230-2-0-4-ec216ba796\" DevicePath \"\""
Jun 20 19:05:51.793359 systemd[1]: Removed slice kubepods-besteffort-podb6fc2633_e411_4526_aab5_5c4e55b6809f.slice - libcontainer container kubepods-besteffort-podb6fc2633_e411_4526_aab5_5c4e55b6809f.slice.
Jun 20 19:05:51.803616 kubelet[2647]: I0620 19:05:51.803562 2647 scope.go:117] "RemoveContainer" containerID="f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2"
Jun 20 19:05:51.821514 systemd[1]: Removed slice kubepods-burstable-podb0ca000e_2bf9_4f69_8a9d_cf84eacc1b32.slice - libcontainer container kubepods-burstable-podb0ca000e_2bf9_4f69_8a9d_cf84eacc1b32.slice.
Jun 20 19:05:51.823577 systemd[1]: kubepods-burstable-podb0ca000e_2bf9_4f69_8a9d_cf84eacc1b32.slice: Consumed 8.359s CPU time, 193.2M memory peak, 68.7M read from disk, 13.3M written to disk.
Jun 20 19:05:51.874409 containerd[1547]: time="2025-06-20T19:05:51.874322440Z" level=info msg="RemoveContainer for \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\"" Jun 20 19:05:51.880043 containerd[1547]: time="2025-06-20T19:05:51.879964048Z" level=info msg="RemoveContainer for \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\" returns successfully" Jun 20 19:05:51.880666 kubelet[2647]: I0620 19:05:51.880616 2647 scope.go:117] "RemoveContainer" containerID="f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2" Jun 20 19:05:51.881198 containerd[1547]: time="2025-06-20T19:05:51.881130143Z" level=error msg="ContainerStatus for \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\": not found" Jun 20 19:05:51.881681 kubelet[2647]: E0620 19:05:51.881630 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\": not found" containerID="f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2" Jun 20 19:05:51.882087 kubelet[2647]: I0620 19:05:51.881818 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2"} err="failed to get container status \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0f9e5cb478d82f3ddee67395b415a56a33e3e43737efcfee292502ab1e79fe2\": not found" Jun 20 19:05:51.882087 kubelet[2647]: I0620 19:05:51.881972 2647 scope.go:117] "RemoveContainer" containerID="b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69" Jun 20 19:05:51.884040 
containerd[1547]: time="2025-06-20T19:05:51.883937847Z" level=info msg="RemoveContainer for \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\"" Jun 20 19:05:51.889137 containerd[1547]: time="2025-06-20T19:05:51.888981119Z" level=info msg="RemoveContainer for \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\" returns successfully" Jun 20 19:05:51.889678 kubelet[2647]: I0620 19:05:51.889504 2647 scope.go:117] "RemoveContainer" containerID="5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897" Jun 20 19:05:51.891721 containerd[1547]: time="2025-06-20T19:05:51.891592334Z" level=info msg="RemoveContainer for \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\"" Jun 20 19:05:51.897335 containerd[1547]: time="2025-06-20T19:05:51.897214184Z" level=info msg="RemoveContainer for \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\" returns successfully" Jun 20 19:05:51.897637 kubelet[2647]: I0620 19:05:51.897521 2647 scope.go:117] "RemoveContainer" containerID="b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808" Jun 20 19:05:51.899114 containerd[1547]: time="2025-06-20T19:05:51.898981481Z" level=info msg="RemoveContainer for \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\"" Jun 20 19:05:51.904318 containerd[1547]: time="2025-06-20T19:05:51.904287227Z" level=info msg="RemoveContainer for \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\" returns successfully" Jun 20 19:05:51.904768 kubelet[2647]: I0620 19:05:51.904738 2647 scope.go:117] "RemoveContainer" containerID="ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546" Jun 20 19:05:51.906880 containerd[1547]: time="2025-06-20T19:05:51.906507355Z" level=info msg="RemoveContainer for \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\"" Jun 20 19:05:51.912892 containerd[1547]: time="2025-06-20T19:05:51.912741488Z" level=info msg="RemoveContainer for 
\"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\" returns successfully" Jun 20 19:05:51.913331 kubelet[2647]: E0620 19:05:51.912500 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-gg7gs" podUID="466a9534-fe12-476e-9f50-a4bae6981ca2" Jun 20 19:05:51.913433 kubelet[2647]: I0620 19:05:51.913285 2647 scope.go:117] "RemoveContainer" containerID="5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64" Jun 20 19:05:51.915291 containerd[1547]: time="2025-06-20T19:05:51.915256893Z" level=info msg="RemoveContainer for \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\"" Jun 20 19:05:51.916173 kubelet[2647]: I0620 19:05:51.916137 2647 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" path="/var/lib/kubelet/pods/b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32/volumes" Jun 20 19:05:51.917026 kubelet[2647]: I0620 19:05:51.916976 2647 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6fc2633-e411-4526-aab5-5c4e55b6809f" path="/var/lib/kubelet/pods/b6fc2633-e411-4526-aab5-5c4e55b6809f/volumes" Jun 20 19:05:51.920957 containerd[1547]: time="2025-06-20T19:05:51.920852965Z" level=info msg="RemoveContainer for \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\" returns successfully" Jun 20 19:05:51.921420 containerd[1547]: time="2025-06-20T19:05:51.921299647Z" level=error msg="ContainerStatus for \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\": not found" Jun 20 19:05:51.921480 kubelet[2647]: I0620 19:05:51.921062 2647 scope.go:117] "RemoveContainer" 
containerID="b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69" Jun 20 19:05:51.921750 kubelet[2647]: E0620 19:05:51.921685 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\": not found" containerID="b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69" Jun 20 19:05:51.921750 kubelet[2647]: I0620 19:05:51.921735 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69"} err="failed to get container status \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0ce6e22c2c0621e5ef60e6a39dad06d4655467819b741f3100098780b200c69\": not found" Jun 20 19:05:51.922126 kubelet[2647]: I0620 19:05:51.921765 2647 scope.go:117] "RemoveContainer" containerID="5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897" Jun 20 19:05:51.922214 containerd[1547]: time="2025-06-20T19:05:51.922043436Z" level=error msg="ContainerStatus for \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\": not found" Jun 20 19:05:51.922304 kubelet[2647]: E0620 19:05:51.922258 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\": not found" containerID="5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897" Jun 20 19:05:51.922366 kubelet[2647]: I0620 19:05:51.922299 2647 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897"} err="failed to get container status \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f5ca3a229e597676c86b3b48abca6f4cee5780e9bd61a33ff040746b7e80897\": not found" Jun 20 19:05:51.922366 kubelet[2647]: I0620 19:05:51.922325 2647 scope.go:117] "RemoveContainer" containerID="b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808" Jun 20 19:05:51.922755 containerd[1547]: time="2025-06-20T19:05:51.922515224Z" level=error msg="ContainerStatus for \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\": not found" Jun 20 19:05:51.922957 kubelet[2647]: E0620 19:05:51.922918 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\": not found" containerID="b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808" Jun 20 19:05:51.923167 kubelet[2647]: I0620 19:05:51.923060 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808"} err="failed to get container status \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\": rpc error: code = NotFound desc = an error occurred when try to find container \"b47ec046d0792b061304c210e8fa44f13b1461752df038cfd99a6b5115e4b808\": not found" Jun 20 19:05:51.923167 kubelet[2647]: I0620 19:05:51.923083 2647 scope.go:117] "RemoveContainer" containerID="ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546" Jun 20 19:05:51.923363 containerd[1547]: 
time="2025-06-20T19:05:51.923294781Z" level=error msg="ContainerStatus for \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\": not found" Jun 20 19:05:51.923551 kubelet[2647]: E0620 19:05:51.923476 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\": not found" containerID="ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546" Jun 20 19:05:51.923608 kubelet[2647]: I0620 19:05:51.923566 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546"} err="failed to get container status \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae2bf74e058bef11a44b6675db720c5769037d0fe17aedf5046b32a95a0b6546\": not found" Jun 20 19:05:51.923608 kubelet[2647]: I0620 19:05:51.923599 2647 scope.go:117] "RemoveContainer" containerID="5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64" Jun 20 19:05:51.924325 containerd[1547]: time="2025-06-20T19:05:51.924273544Z" level=error msg="ContainerStatus for \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\": not found" Jun 20 19:05:51.924620 kubelet[2647]: E0620 19:05:51.924517 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\": not 
found" containerID="5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64" Jun 20 19:05:51.924768 kubelet[2647]: I0620 19:05:51.924636 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64"} err="failed to get container status \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c07c68c2f8a2af60e5ac737d684d9acfcebc7b393c66cd263761102b482dc64\": not found" Jun 20 19:05:51.979683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca-rootfs.mount: Deactivated successfully. Jun 20 19:05:51.979846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca-shm.mount: Deactivated successfully. Jun 20 19:05:51.979970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2-rootfs.mount: Deactivated successfully. Jun 20 19:05:51.980102 systemd[1]: var-lib-kubelet-pods-b6fc2633\x2de411\x2d4526\x2daab5\x2d5c4e55b6809f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpg95.mount: Deactivated successfully. Jun 20 19:05:51.980229 systemd[1]: var-lib-kubelet-pods-b0ca000e\x2d2bf9\x2d4f69\x2d8a9d\x2dcf84eacc1b32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drxxd8.mount: Deactivated successfully. Jun 20 19:05:51.980352 systemd[1]: var-lib-kubelet-pods-b0ca000e\x2d2bf9\x2d4f69\x2d8a9d\x2dcf84eacc1b32-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:05:51.980469 systemd[1]: var-lib-kubelet-pods-b0ca000e\x2d2bf9\x2d4f69\x2d8a9d\x2dcf84eacc1b32-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jun 20 19:05:53.033179 sshd[4418]: Connection closed by 139.178.68.195 port 50562 Jun 20 19:05:53.034150 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Jun 20 19:05:53.045633 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:05:53.047058 systemd[1]: sshd@53-157.180.74.176:22-139.178.68.195:50562.service: Deactivated successfully. Jun 20 19:05:53.050093 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:05:53.052300 systemd-logind[1529]: Removed session 20. Jun 20 19:05:53.212946 systemd[1]: Started sshd@54-157.180.74.176:22-139.178.68.195:50568.service - OpenSSH per-connection server daemon (139.178.68.195:50568). Jun 20 19:05:53.783577 kubelet[2647]: I0620 19:05:53.782384 2647 setters.go:602] "Node became not ready" node="ci-4230-2-0-4-ec216ba796" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:05:53Z","lastTransitionTime":"2025-06-20T19:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 19:05:53.911748 kubelet[2647]: E0620 19:05:53.911579 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-gg7gs" podUID="466a9534-fe12-476e-9f50-a4bae6981ca2" Jun 20 19:05:54.228959 sshd[4577]: Accepted publickey for core from 139.178.68.195 port 50568 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:05:54.231291 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:05:54.240006 systemd-logind[1529]: New session 21 of user core. Jun 20 19:05:54.245829 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 20 19:05:55.391415 kubelet[2647]: I0620 19:05:55.391360 2647 memory_manager.go:355] "RemoveStaleState removing state" podUID="b0ca000e-2bf9-4f69-8a9d-cf84eacc1b32" containerName="cilium-agent" Jun 20 19:05:55.392069 kubelet[2647]: I0620 19:05:55.391555 2647 memory_manager.go:355] "RemoveStaleState removing state" podUID="b6fc2633-e411-4526-aab5-5c4e55b6809f" containerName="cilium-operator" Jun 20 19:05:55.462097 systemd[1]: Created slice kubepods-burstable-pod014c37e0_071b_404c_b58d_60c2dd7bb4b0.slice - libcontainer container kubepods-burstable-pod014c37e0_071b_404c_b58d_60c2dd7bb4b0.slice. Jun 20 19:05:55.500462 kubelet[2647]: I0620 19:05:55.500407 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/014c37e0-071b-404c-b58d-60c2dd7bb4b0-cilium-config-path\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503163 kubelet[2647]: I0620 19:05:55.503078 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-cilium-cgroup\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503163 kubelet[2647]: I0620 19:05:55.503151 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/014c37e0-071b-404c-b58d-60c2dd7bb4b0-hubble-tls\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503301 kubelet[2647]: I0620 19:05:55.503199 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-cilium-run\") pod 
\"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503301 kubelet[2647]: I0620 19:05:55.503235 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/014c37e0-071b-404c-b58d-60c2dd7bb4b0-cilium-ipsec-secrets\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503301 kubelet[2647]: I0620 19:05:55.503275 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-bpf-maps\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503425 kubelet[2647]: I0620 19:05:55.503315 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-host-proc-sys-kernel\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503425 kubelet[2647]: I0620 19:05:55.503358 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-cni-path\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503425 kubelet[2647]: I0620 19:05:55.503395 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-lib-modules\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503594 
kubelet[2647]: I0620 19:05:55.503430 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5qw8\" (UniqueName: \"kubernetes.io/projected/014c37e0-071b-404c-b58d-60c2dd7bb4b0-kube-api-access-d5qw8\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503594 kubelet[2647]: I0620 19:05:55.503470 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-hostproc\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503594 kubelet[2647]: I0620 19:05:55.503511 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-xtables-lock\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503725 kubelet[2647]: I0620 19:05:55.503598 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/014c37e0-071b-404c-b58d-60c2dd7bb4b0-clustermesh-secrets\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503725 kubelet[2647]: I0620 19:05:55.503637 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-host-proc-sys-net\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.503725 kubelet[2647]: I0620 19:05:55.503679 2647 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/014c37e0-071b-404c-b58d-60c2dd7bb4b0-etc-cni-netd\") pod \"cilium-9vgj8\" (UID: \"014c37e0-071b-404c-b58d-60c2dd7bb4b0\") " pod="kube-system/cilium-9vgj8" Jun 20 19:05:55.570714 sshd[4581]: Connection closed by 139.178.68.195 port 50568 Jun 20 19:05:55.571642 sshd-session[4577]: pam_unix(sshd:session): session closed for user core Jun 20 19:05:55.576097 systemd[1]: sshd@54-157.180.74.176:22-139.178.68.195:50568.service: Deactivated successfully. Jun 20 19:05:55.580334 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:05:55.584413 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:05:55.586692 systemd-logind[1529]: Removed session 21. Jun 20 19:05:55.749674 systemd[1]: Started sshd@55-157.180.74.176:22-139.178.68.195:51230.service - OpenSSH per-connection server daemon (139.178.68.195:51230). Jun 20 19:05:55.765773 containerd[1547]: time="2025-06-20T19:05:55.765143086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vgj8,Uid:014c37e0-071b-404c-b58d-60c2dd7bb4b0,Namespace:kube-system,Attempt:0,}" Jun 20 19:05:55.808015 containerd[1547]: time="2025-06-20T19:05:55.807877084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:05:55.808835 containerd[1547]: time="2025-06-20T19:05:55.808761690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:05:55.809166 containerd[1547]: time="2025-06-20T19:05:55.809091370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:55.809660 containerd[1547]: time="2025-06-20T19:05:55.809596813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:05:55.835752 systemd[1]: Started cri-containerd-7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23.scope - libcontainer container 7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23. Jun 20 19:05:55.867430 containerd[1547]: time="2025-06-20T19:05:55.867368908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vgj8,Uid:014c37e0-071b-404c-b58d-60c2dd7bb4b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\"" Jun 20 19:05:55.872367 containerd[1547]: time="2025-06-20T19:05:55.872317878Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:05:55.911894 kubelet[2647]: E0620 19:05:55.911284 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-gg7gs" podUID="466a9534-fe12-476e-9f50-a4bae6981ca2" Jun 20 19:05:55.917216 containerd[1547]: time="2025-06-20T19:05:55.917155257Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627\"" Jun 20 19:05:55.917722 containerd[1547]: time="2025-06-20T19:05:55.917698169Z" level=info msg="StartContainer for \"ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627\"" Jun 20 19:05:55.953732 systemd[1]: Started cri-containerd-ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627.scope - libcontainer container ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627. 
Jun 20 19:05:55.989818 containerd[1547]: time="2025-06-20T19:05:55.989728112Z" level=info msg="StartContainer for \"ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627\" returns successfully" Jun 20 19:05:56.005977 systemd[1]: cri-containerd-ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627.scope: Deactivated successfully. Jun 20 19:05:56.006326 systemd[1]: cri-containerd-ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627.scope: Consumed 28ms CPU time, 8.6M memory peak, 2.2M read from disk. Jun 20 19:05:56.056781 containerd[1547]: time="2025-06-20T19:05:56.056713179Z" level=info msg="shim disconnected" id=ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627 namespace=k8s.io Jun 20 19:05:56.057134 containerd[1547]: time="2025-06-20T19:05:56.057095358Z" level=warning msg="cleaning up after shim disconnected" id=ebe20781eccaa3b7f8742b92c097b1164f8216a6379d02be71c674781de5c627 namespace=k8s.io Jun 20 19:05:56.057134 containerd[1547]: time="2025-06-20T19:05:56.057117720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:05:56.073113 containerd[1547]: time="2025-06-20T19:05:56.073021931Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:05:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:05:56.105003 kubelet[2647]: E0620 19:05:56.104933 2647 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:05:56.738761 sshd[4596]: Accepted publickey for core from 139.178.68.195 port 51230 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:05:56.740563 sshd-session[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:05:56.748809 systemd-logind[1529]: New session 22 of user 
core. Jun 20 19:05:56.752755 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:05:56.841554 containerd[1547]: time="2025-06-20T19:05:56.841098389Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:05:56.874954 containerd[1547]: time="2025-06-20T19:05:56.870006730Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692\"" Jun 20 19:05:56.880566 containerd[1547]: time="2025-06-20T19:05:56.878304838Z" level=info msg="StartContainer for \"f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692\"" Jun 20 19:05:56.942901 systemd[1]: Started cri-containerd-f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692.scope - libcontainer container f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692. Jun 20 19:05:56.970021 containerd[1547]: time="2025-06-20T19:05:56.969857542Z" level=info msg="StartContainer for \"f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692\" returns successfully" Jun 20 19:05:56.980303 systemd[1]: cri-containerd-f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692.scope: Deactivated successfully. Jun 20 19:05:56.981013 systemd[1]: cri-containerd-f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692.scope: Consumed 21ms CPU time, 7.5M memory peak, 2M read from disk. Jun 20 19:05:57.007001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692-rootfs.mount: Deactivated successfully. 
Jun 20 19:05:57.013846 containerd[1547]: time="2025-06-20T19:05:57.013766346Z" level=info msg="shim disconnected" id=f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692 namespace=k8s.io Jun 20 19:05:57.013846 containerd[1547]: time="2025-06-20T19:05:57.013836157Z" level=warning msg="cleaning up after shim disconnected" id=f1f056fc015ba4358e3a00c9158a54692850ec350a9556ad3c3dd1736b3a4692 namespace=k8s.io Jun 20 19:05:57.013846 containerd[1547]: time="2025-06-20T19:05:57.013847620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:05:57.411507 sshd[4700]: Connection closed by 139.178.68.195 port 51230 Jun 20 19:05:57.412343 sshd-session[4596]: pam_unix(sshd:session): session closed for user core Jun 20 19:05:57.416562 systemd[1]: sshd@55-157.180.74.176:22-139.178.68.195:51230.service: Deactivated successfully. Jun 20 19:05:57.419154 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:05:57.421360 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:05:57.423042 systemd-logind[1529]: Removed session 22. Jun 20 19:05:57.588247 systemd[1]: Started sshd@56-157.180.74.176:22-139.178.68.195:51240.service - OpenSSH per-connection server daemon (139.178.68.195:51240). 
Jun 20 19:05:57.843152 containerd[1547]: time="2025-06-20T19:05:57.842661047Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:05:57.873577 containerd[1547]: time="2025-06-20T19:05:57.872914737Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587\"" Jun 20 19:05:57.876282 containerd[1547]: time="2025-06-20T19:05:57.876050253Z" level=info msg="StartContainer for \"4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587\"" Jun 20 19:05:57.918461 kubelet[2647]: E0620 19:05:57.915736 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-gg7gs" podUID="466a9534-fe12-476e-9f50-a4bae6981ca2" Jun 20 19:05:57.949760 systemd[1]: Started cri-containerd-4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587.scope - libcontainer container 4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587. Jun 20 19:05:57.992311 containerd[1547]: time="2025-06-20T19:05:57.992232947Z" level=info msg="StartContainer for \"4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587\" returns successfully" Jun 20 19:05:57.997309 systemd[1]: cri-containerd-4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587.scope: Deactivated successfully. Jun 20 19:05:58.016213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587-rootfs.mount: Deactivated successfully. 
Jun 20 19:05:58.027116 containerd[1547]: time="2025-06-20T19:05:58.027028729Z" level=info msg="shim disconnected" id=4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587 namespace=k8s.io
Jun 20 19:05:58.027116 containerd[1547]: time="2025-06-20T19:05:58.027113648Z" level=warning msg="cleaning up after shim disconnected" id=4345dfc00577c04ae2af0cca5469cb222edc52fe633cd72b7a949c65a970f587 namespace=k8s.io
Jun 20 19:05:58.027284 containerd[1547]: time="2025-06-20T19:05:58.027123798Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:05:58.589998 sshd[4768]: Accepted publickey for core from 139.178.68.195 port 51240 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ
Jun 20 19:05:58.591923 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:05:58.599217 systemd-logind[1529]: New session 23 of user core.
Jun 20 19:05:58.606786 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:05:58.848818 containerd[1547]: time="2025-06-20T19:05:58.848319822Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:05:58.868044 containerd[1547]: time="2025-06-20T19:05:58.867948375Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8\""
Jun 20 19:05:58.870103 containerd[1547]: time="2025-06-20T19:05:58.868759192Z" level=info msg="StartContainer for \"e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8\""
Jun 20 19:05:58.916868 systemd[1]: Started cri-containerd-e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8.scope - libcontainer container e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8.
Jun 20 19:05:58.955233 systemd[1]: cri-containerd-e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8.scope: Deactivated successfully.
Jun 20 19:05:58.959638 containerd[1547]: time="2025-06-20T19:05:58.958424341Z" level=info msg="StartContainer for \"e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8\" returns successfully"
Jun 20 19:05:58.986149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8-rootfs.mount: Deactivated successfully.
Jun 20 19:05:58.997440 containerd[1547]: time="2025-06-20T19:05:58.997354843Z" level=info msg="shim disconnected" id=e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8 namespace=k8s.io
Jun 20 19:05:58.997440 containerd[1547]: time="2025-06-20T19:05:58.997424003Z" level=warning msg="cleaning up after shim disconnected" id=e5c854d5fdf3470793247da098582ba8cbea9811fc7ea80e659bc3eeee8fe6b8 namespace=k8s.io
Jun 20 19:05:58.997440 containerd[1547]: time="2025-06-20T19:05:58.997436337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:05:59.856289 containerd[1547]: time="2025-06-20T19:05:59.856230932Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:05:59.891259 containerd[1547]: time="2025-06-20T19:05:59.890770869Z" level=info msg="CreateContainer within sandbox \"7150b4c20519e5e571102e325c005f9c3df5f0cbc8e98501fe7fee7e979d9c23\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"151b13bd9959d695e665a0f46b4d35e9d0644081cd3a0028688b64abba8c2e21\""
Jun 20 19:05:59.891177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564847803.mount: Deactivated successfully.
Jun 20 19:05:59.895457 containerd[1547]: time="2025-06-20T19:05:59.894649096Z" level=info msg="StartContainer for \"151b13bd9959d695e665a0f46b4d35e9d0644081cd3a0028688b64abba8c2e21\""
Jun 20 19:05:59.923318 kubelet[2647]: E0620 19:05:59.923034 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-gg7gs" podUID="466a9534-fe12-476e-9f50-a4bae6981ca2"
Jun 20 19:05:59.959849 systemd[1]: Started cri-containerd-151b13bd9959d695e665a0f46b4d35e9d0644081cd3a0028688b64abba8c2e21.scope - libcontainer container 151b13bd9959d695e665a0f46b4d35e9d0644081cd3a0028688b64abba8c2e21.
Jun 20 19:06:00.032262 containerd[1547]: time="2025-06-20T19:06:00.032211302Z" level=info msg="StartContainer for \"151b13bd9959d695e665a0f46b4d35e9d0644081cd3a0028688b64abba8c2e21\" returns successfully"
Jun 20 19:06:00.463572 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jun 20 19:06:00.887140 kubelet[2647]: I0620 19:06:00.886314 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9vgj8" podStartSLOduration=5.886289134 podStartE2EDuration="5.886289134s" podCreationTimestamp="2025-06-20 19:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:06:00.885303958 +0000 UTC m=+345.161792277" watchObservedRunningTime="2025-06-20 19:06:00.886289134 +0000 UTC m=+345.162777453"
Jun 20 19:06:03.581627 systemd-networkd[1429]: lxc_health: Link UP
Jun 20 19:06:03.589060 systemd-networkd[1429]: lxc_health: Gained carrier
Jun 20 19:06:03.823970 systemd[1]: run-containerd-runc-k8s.io-151b13bd9959d695e665a0f46b4d35e9d0644081cd3a0028688b64abba8c2e21-runc.bUse1u.mount: Deactivated successfully.
Jun 20 19:06:04.948486 systemd-networkd[1429]: lxc_health: Gained IPv6LL
Jun 20 19:06:10.357112 systemd[1]: run-containerd-runc-k8s.io-151b13bd9959d695e665a0f46b4d35e9d0644081cd3a0028688b64abba8c2e21-runc.TuFE9f.mount: Deactivated successfully.
Jun 20 19:06:10.573965 sshd[4824]: Connection closed by 139.178.68.195 port 51240
Jun 20 19:06:10.575356 sshd-session[4768]: pam_unix(sshd:session): session closed for user core
Jun 20 19:06:10.585914 systemd[1]: sshd@56-157.180.74.176:22-139.178.68.195:51240.service: Deactivated successfully.
Jun 20 19:06:10.588839 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:06:10.590416 systemd-logind[1529]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:06:10.593018 systemd-logind[1529]: Removed session 23.
Jun 20 19:06:15.920585 containerd[1547]: time="2025-06-20T19:06:15.920006870Z" level=info msg="StopPodSandbox for \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\""
Jun 20 19:06:15.922911 containerd[1547]: time="2025-06-20T19:06:15.921502903Z" level=info msg="TearDown network for sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" successfully"
Jun 20 19:06:15.922911 containerd[1547]: time="2025-06-20T19:06:15.921582803Z" level=info msg="StopPodSandbox for \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" returns successfully"
Jun 20 19:06:15.922911 containerd[1547]: time="2025-06-20T19:06:15.922693470Z" level=info msg="RemovePodSandbox for \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\""
Jun 20 19:06:15.922911 containerd[1547]: time="2025-06-20T19:06:15.922727603Z" level=info msg="Forcibly stopping sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\""
Jun 20 19:06:15.924267 containerd[1547]: time="2025-06-20T19:06:15.922807344Z" level=info msg="TearDown network for sandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" successfully"
Jun 20 19:06:15.936334 containerd[1547]: time="2025-06-20T19:06:15.936211445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 19:06:15.936595 containerd[1547]: time="2025-06-20T19:06:15.936370134Z" level=info msg="RemovePodSandbox \"b66230fe0cbb4d8e05f6cf7f9e66520ec63f00774c82b9d8a91510522532c8b2\" returns successfully"
Jun 20 19:06:15.937361 containerd[1547]: time="2025-06-20T19:06:15.937092837Z" level=info msg="StopPodSandbox for \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\""
Jun 20 19:06:15.937361 containerd[1547]: time="2025-06-20T19:06:15.937212233Z" level=info msg="TearDown network for sandbox \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" successfully"
Jun 20 19:06:15.937361 containerd[1547]: time="2025-06-20T19:06:15.937229846Z" level=info msg="StopPodSandbox for \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" returns successfully"
Jun 20 19:06:15.939614 containerd[1547]: time="2025-06-20T19:06:15.937723528Z" level=info msg="RemovePodSandbox for \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\""
Jun 20 19:06:15.939614 containerd[1547]: time="2025-06-20T19:06:15.937759555Z" level=info msg="Forcibly stopping sandbox \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\""
Jun 20 19:06:15.939614 containerd[1547]: time="2025-06-20T19:06:15.937837773Z" level=info msg="TearDown network for sandbox \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" successfully"
Jun 20 19:06:15.945049 containerd[1547]: time="2025-06-20T19:06:15.945003804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 19:06:15.945245 containerd[1547]: time="2025-06-20T19:06:15.945222176Z" level=info msg="RemovePodSandbox \"41c03888fe4f67e171e9bd937cb8ed724524a0af817a409da8ce8e42e36bb5ca\" returns successfully"
Jun 20 19:06:26.443177 systemd[1]: cri-containerd-effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5.scope: Deactivated successfully.
Jun 20 19:06:26.444409 systemd[1]: cri-containerd-effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5.scope: Consumed 6.687s CPU time, 78.7M memory peak, 24.8M read from disk.
Jun 20 19:06:26.492321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5-rootfs.mount: Deactivated successfully.
Jun 20 19:06:26.516478 containerd[1547]: time="2025-06-20T19:06:26.516372635Z" level=info msg="shim disconnected" id=effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5 namespace=k8s.io
Jun 20 19:06:26.516478 containerd[1547]: time="2025-06-20T19:06:26.516442737Z" level=warning msg="cleaning up after shim disconnected" id=effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5 namespace=k8s.io
Jun 20 19:06:26.516478 containerd[1547]: time="2025-06-20T19:06:26.516467214Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:06:26.838342 kubelet[2647]: E0620 19:06:26.837846 2647 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49876->10.0.0.2:2379: read: connection timed out"
Jun 20 19:06:26.851884 systemd[1]: cri-containerd-73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df.scope: Deactivated successfully.
Jun 20 19:06:26.852523 systemd[1]: cri-containerd-73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df.scope: Consumed 4.343s CPU time, 32.8M memory peak, 13.3M read from disk.
Jun 20 19:06:26.891403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df-rootfs.mount: Deactivated successfully.
Jun 20 19:06:26.904222 containerd[1547]: time="2025-06-20T19:06:26.903992889Z" level=info msg="shim disconnected" id=73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df namespace=k8s.io
Jun 20 19:06:26.904222 containerd[1547]: time="2025-06-20T19:06:26.904169342Z" level=warning msg="cleaning up after shim disconnected" id=73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df namespace=k8s.io
Jun 20 19:06:26.904222 containerd[1547]: time="2025-06-20T19:06:26.904186084Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:06:26.929868 kubelet[2647]: I0620 19:06:26.929638 2647 scope.go:117] "RemoveContainer" containerID="effad871f11491f9a7dd8e9646b7a5e4e0a8df678298839737e3984346017ab5"
Jun 20 19:06:26.932080 kubelet[2647]: I0620 19:06:26.932014 2647 scope.go:117] "RemoveContainer" containerID="73ee98ec5cfc4b8cb5b87e235d5d8b0ba3e27584bd12963a6bb593089a00d2df"
Jun 20 19:06:26.937605 containerd[1547]: time="2025-06-20T19:06:26.937562352Z" level=info msg="CreateContainer within sandbox \"7c95909c7db8a11f5cb56fa6a63c1040838c4e729a1973d2644c4918e1e58c12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jun 20 19:06:26.937831 containerd[1547]: time="2025-06-20T19:06:26.937762259Z" level=info msg="CreateContainer within sandbox \"8740e1b3c66368f5e2cacddd9a9773e1dee8ae4e420e225533029c9069da480e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jun 20 19:06:26.965495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452342567.mount: Deactivated successfully.
Jun 20 19:06:26.974947 containerd[1547]: time="2025-06-20T19:06:26.974199056Z" level=info msg="CreateContainer within sandbox \"8740e1b3c66368f5e2cacddd9a9773e1dee8ae4e420e225533029c9069da480e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"22fe88523ec9c09f550c574eb752ae2f0fd834bc85762d7454a8065f02ed4e2c\""
Jun 20 19:06:26.975433 containerd[1547]: time="2025-06-20T19:06:26.975398582Z" level=info msg="StartContainer for \"22fe88523ec9c09f550c574eb752ae2f0fd834bc85762d7454a8065f02ed4e2c\""
Jun 20 19:06:26.977707 containerd[1547]: time="2025-06-20T19:06:26.977674369Z" level=info msg="CreateContainer within sandbox \"7c95909c7db8a11f5cb56fa6a63c1040838c4e729a1973d2644c4918e1e58c12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"73d4181cc5a5fa3472146cadd4fdd9153c7cd50049d92deaa658f695d72e10be\""
Jun 20 19:06:26.978261 containerd[1547]: time="2025-06-20T19:06:26.978226662Z" level=info msg="StartContainer for \"73d4181cc5a5fa3472146cadd4fdd9153c7cd50049d92deaa658f695d72e10be\""
Jun 20 19:06:27.009716 systemd[1]: Started cri-containerd-22fe88523ec9c09f550c574eb752ae2f0fd834bc85762d7454a8065f02ed4e2c.scope - libcontainer container 22fe88523ec9c09f550c574eb752ae2f0fd834bc85762d7454a8065f02ed4e2c.
Jun 20 19:06:27.016864 systemd[1]: Started cri-containerd-73d4181cc5a5fa3472146cadd4fdd9153c7cd50049d92deaa658f695d72e10be.scope - libcontainer container 73d4181cc5a5fa3472146cadd4fdd9153c7cd50049d92deaa658f695d72e10be.
Jun 20 19:06:27.060798 containerd[1547]: time="2025-06-20T19:06:27.060623498Z" level=info msg="StartContainer for \"73d4181cc5a5fa3472146cadd4fdd9153c7cd50049d92deaa658f695d72e10be\" returns successfully"
Jun 20 19:06:27.060798 containerd[1547]: time="2025-06-20T19:06:27.060679715Z" level=info msg="StartContainer for \"22fe88523ec9c09f550c574eb752ae2f0fd834bc85762d7454a8065f02ed4e2c\" returns successfully"
Jun 20 19:06:31.196855 kubelet[2647]: E0620 19:06:31.192505 2647 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49640->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-0-4-ec216ba796.184ad5b58268a271 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-0-4-ec216ba796,UID:1ad1e74a8d5a5dab456b5d7961b02543,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-4-ec216ba796,},FirstTimestamp:2025-06-20 19:06:20.740551281 +0000 UTC m=+365.017039620,LastTimestamp:2025-06-20 19:06:20.740551281 +0000 UTC m=+365.017039620,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-4-ec216ba796,}"