Jul 6 23:55:54.024024 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:55:54.024067 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:54.024085 kernel: BIOS-provided physical RAM map:
Jul 6 23:55:54.024098 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:55:54.024110 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:55:54.024123 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:55:54.024138 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jul 6 23:55:54.024150 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jul 6 23:55:54.024166 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:55:54.024178 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 6 23:55:54.024191 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:55:54.024203 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:55:54.024216 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:55:54.024228 kernel: NX (Execute Disable) protection: active
Jul 6 23:55:54.024247 kernel: APIC: Static calls initialized
Jul 6 23:55:54.024261 kernel: SMBIOS 3.0.0 present.
Jul 6 23:55:54.024275 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jul 6 23:55:54.024289 kernel: Hypervisor detected: KVM
Jul 6 23:55:54.024302 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:55:54.024316 kernel: kvm-clock: using sched offset of 3010301225 cycles
Jul 6 23:55:54.024330 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:55:54.024345 kernel: tsc: Detected 2495.312 MHz processor
Jul 6 23:55:54.024359 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:55:54.024377 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:55:54.024391 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jul 6 23:55:54.024406 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:55:54.024420 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:55:54.024433 kernel: Using GB pages for direct mapping
Jul 6 23:55:54.024447 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:55:54.024461 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Jul 6 23:55:54.024475 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.024490 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.024509 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.024523 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jul 6 23:55:54.024538 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.024551 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.024565 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.024579 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.024593 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Jul 6 23:55:54.024608 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Jul 6 23:55:54.024652 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jul 6 23:55:54.024666 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Jul 6 23:55:54.024681 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Jul 6 23:55:54.024696 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Jul 6 23:55:54.024710 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Jul 6 23:55:54.024725 kernel: No NUMA configuration found
Jul 6 23:55:54.024742 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jul 6 23:55:54.024757 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jul 6 23:55:54.024772 kernel: Zone ranges:
Jul 6 23:55:54.024786 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:55:54.024801 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jul 6 23:55:54.024815 kernel: Normal empty
Jul 6 23:55:54.024844 kernel: Movable zone start for each node
Jul 6 23:55:54.024859 kernel: Early memory node ranges
Jul 6 23:55:54.024874 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:55:54.024888 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jul 6 23:55:54.024905 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jul 6 23:55:54.024920 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:55:54.024935 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:55:54.024949 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 6 23:55:54.024964 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:55:54.024978 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:55:54.024993 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:55:54.025007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:55:54.025022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:55:54.025039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:55:54.025054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:55:54.025069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:55:54.025083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:55:54.025098 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:55:54.025112 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:55:54.025127 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:55:54.025141 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 6 23:55:54.025156 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:55:54.025179 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:55:54.025199 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:55:54.025220 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:55:54.025241 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:55:54.025261 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:55:54.025282 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 6 23:55:54.025306 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:54.025329 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:55:54.025354 kernel: random: crng init done
Jul 6 23:55:54.025376 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:55:54.025396 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:55:54.025417 kernel: Fallback order for Node 0: 0
Jul 6 23:55:54.025437 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jul 6 23:55:54.025458 kernel: Policy zone: DMA32
Jul 6 23:55:54.025478 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:55:54.025500 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125152K reserved, 0K cma-reserved)
Jul 6 23:55:54.025521 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:55:54.025547 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:55:54.025568 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:55:54.025588 kernel: Dynamic Preempt: voluntary
Jul 6 23:55:54.025607 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:55:54.026971 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:55:54.026994 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:55:54.027010 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:55:54.027025 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:55:54.027040 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:55:54.027056 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:55:54.027079 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:55:54.027093 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:55:54.027108 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:55:54.027123 kernel: Console: colour VGA+ 80x25
Jul 6 23:55:54.027138 kernel: printk: console [tty0] enabled
Jul 6 23:55:54.027152 kernel: printk: console [ttyS0] enabled
Jul 6 23:55:54.027167 kernel: ACPI: Core revision 20230628
Jul 6 23:55:54.027182 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:55:54.027197 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:55:54.027215 kernel: x2apic enabled
Jul 6 23:55:54.027230 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:55:54.027244 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:55:54.027259 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:55:54.027274 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Jul 6 23:55:54.027289 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:55:54.027304 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:55:54.027319 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:55:54.027345 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:55:54.027360 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:55:54.027376 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:55:54.027391 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:55:54.027410 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:55:54.027425 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:55:54.027441 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:55:54.027456 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:55:54.027472 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:55:54.027490 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:55:54.027506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:55:54.027522 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:55:54.027537 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:55:54.027552 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:55:54.027568 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:55:54.027583 kernel: landlock: Up and running.
Jul 6 23:55:54.027599 kernel: SELinux: Initializing.
Jul 6 23:55:54.027617 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.027659 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.027675 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:55:54.027697 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:55:54.027719 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:55:54.027741 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:55:54.027763 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:55:54.027784 kernel: ... version:                0
Jul 6 23:55:54.027802 kernel: ... bit width:              48
Jul 6 23:55:54.027823 kernel: ... generic registers:      6
Jul 6 23:55:54.027863 kernel: ... value mask:             0000ffffffffffff
Jul 6 23:55:54.027878 kernel: ... max period:             00007fffffffffff
Jul 6 23:55:54.027893 kernel: ... fixed-purpose events:   0
Jul 6 23:55:54.027909 kernel: ... event mask:             000000000000003f
Jul 6 23:55:54.027924 kernel: signal: max sigframe size: 1776
Jul 6 23:55:54.027939 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:55:54.027955 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:55:54.027970 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:55:54.027989 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:55:54.028004 kernel: .... node #0, CPUs: #1
Jul 6 23:55:54.028019 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:55:54.028034 kernel: smpboot: Max logical packages: 1
Jul 6 23:55:54.028050 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jul 6 23:55:54.028065 kernel: devtmpfs: initialized
Jul 6 23:55:54.028080 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:55:54.028096 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:55:54.028111 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.028130 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:55:54.028145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:55:54.028160 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:55:54.028176 kernel: audit: type=2000 audit(1751846152.467:1): state=initialized audit_enabled=0 res=1
Jul 6 23:55:54.028191 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:55:54.028206 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:55:54.028222 kernel: cpuidle: using governor menu
Jul 6 23:55:54.028237 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:55:54.028253 kernel: dca service started, version 1.12.1
Jul 6 23:55:54.028271 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:55:54.028286 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:55:54.028302 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:55:54.028318 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:55:54.028333 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:55:54.028349 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:55:54.028364 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:55:54.028379 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:55:54.028395 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:55:54.028413 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:55:54.028428 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:55:54.028444 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:55:54.028459 kernel: ACPI: Interpreter enabled
Jul 6 23:55:54.028474 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:55:54.028490 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:55:54.028505 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:55:54.028521 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:55:54.028536 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:55:54.028554 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:55:54.032325 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:55:54.032455 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:55:54.032569 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:55:54.032584 kernel: PCI host bridge to bus 0000:00
Jul 6 23:55:54.032723 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:55:54.032825 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:55:54.032999 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:55:54.033105 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jul 6 23:55:54.033216 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:55:54.033313 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 6 23:55:54.033409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:55:54.033535 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:55:54.033687 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jul 6 23:55:54.033802 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jul 6 23:55:54.033932 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jul 6 23:55:54.034045 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jul 6 23:55:54.034180 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jul 6 23:55:54.034296 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:55:54.034418 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.034535 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jul 6 23:55:54.038202 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.038328 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jul 6 23:55:54.038442 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.038550 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jul 6 23:55:54.038725 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.038859 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jul 6 23:55:54.038983 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.039088 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jul 6 23:55:54.039201 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.039306 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jul 6 23:55:54.039418 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.039531 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jul 6 23:55:54.039667 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.039777 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jul 6 23:55:54.039905 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.040013 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jul 6 23:55:54.040126 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:55:54.040236 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:55:54.040348 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:55:54.040453 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jul 6 23:55:54.040557 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jul 6 23:55:54.040700 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:55:54.040810 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 6 23:55:54.040949 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 6 23:55:54.041070 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jul 6 23:55:54.041181 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 6 23:55:54.041292 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jul 6 23:55:54.041399 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 6 23:55:54.041506 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 6 23:55:54.041616 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 6 23:55:54.041822 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 6 23:55:54.041958 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jul 6 23:55:54.042065 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 6 23:55:54.042171 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 6 23:55:54.042276 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 6 23:55:54.042396 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 6 23:55:54.042507 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jul 6 23:55:54.042649 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jul 6 23:55:54.042776 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 6 23:55:54.043002 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 6 23:55:54.043114 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 6 23:55:54.043233 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 6 23:55:54.043344 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 6 23:55:54.043452 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 6 23:55:54.043558 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 6 23:55:54.043740 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 6 23:55:54.043878 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 6 23:55:54.043990 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jul 6 23:55:54.044099 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jul 6 23:55:54.044204 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 6 23:55:54.044433 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 6 23:55:54.044762 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 6 23:55:54.044941 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 6 23:55:54.045057 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jul 6 23:55:54.045166 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jul 6 23:55:54.045271 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 6 23:55:54.045374 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jul 6 23:55:54.045477 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 6 23:55:54.045491 kernel: acpiphp: Slot [0] registered
Jul 6 23:55:54.045606 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 6 23:55:54.046788 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jul 6 23:55:54.046917 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jul 6 23:55:54.047027 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jul 6 23:55:54.047186 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 6 23:55:54.047304 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 6 23:55:54.047408 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 6 23:55:54.047422 kernel: acpiphp: Slot [0-2] registered
Jul 6 23:55:54.047532 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 6 23:55:54.048689 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jul 6 23:55:54.048812 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 6 23:55:54.048826 kernel: acpiphp: Slot [0-3] registered
Jul 6 23:55:54.048947 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 6 23:55:54.049053 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 6 23:55:54.049157 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 6 23:55:54.049171 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:55:54.049182 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:55:54.049197 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:55:54.049208 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:55:54.049219 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:55:54.049229 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:55:54.049240 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:55:54.049250 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:55:54.049261 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:55:54.049271 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:55:54.049282 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:55:54.049295 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:55:54.049305 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:55:54.049316 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:55:54.049326 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:55:54.049337 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:55:54.049348 kernel: iommu: Default domain type: Translated
Jul 6 23:55:54.049358 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:55:54.049369 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:55:54.049380 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:55:54.049392 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:55:54.049403 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jul 6 23:55:54.049508 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:55:54.049612 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:55:54.051788 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:55:54.051806 kernel: vgaarb: loaded
Jul 6 23:55:54.051817 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:55:54.051844 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:55:54.051861 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:55:54.051872 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:55:54.051883 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:55:54.051893 kernel: pnp: PnP ACPI init
Jul 6 23:55:54.052008 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:55:54.052024 kernel: pnp: PnP ACPI: found 5 devices
Jul 6 23:55:54.052035 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:55:54.052046 kernel: NET: Registered PF_INET protocol family
Jul 6 23:55:54.052060 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:55:54.052072 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:55:54.052082 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:55:54.052093 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:55:54.052104 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:55:54.052115 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:55:54.052126 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.052136 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.052147 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:55:54.052160 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:55:54.052267 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 6 23:55:54.052374 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 6 23:55:54.052480 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 6 23:55:54.052585 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jul 6 23:55:54.054735 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jul 6 23:55:54.054860 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jul 6 23:55:54.054973 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 6 23:55:54.055077 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 6 23:55:54.055181 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 6 23:55:54.055286 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 6 23:55:54.055390 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 6 23:55:54.055537 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 6 23:55:54.055723 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 6 23:55:54.055880 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 6 23:55:54.056020 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 6 23:55:54.056162 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 6 23:55:54.056296 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 6 23:55:54.056417 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 6 23:55:54.056524 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 6 23:55:54.058653 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 6 23:55:54.058771 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 6 23:55:54.058899 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 6 23:55:54.059023 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jul 6 23:55:54.059133 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 6 23:55:54.059265 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 6 23:55:54.059382 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jul 6 23:55:54.059490 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 6 23:55:54.059597 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 6 23:55:54.059724 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 6 23:55:54.059845 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jul 6 23:55:54.059953 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jul 6 23:55:54.060059 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 6 23:55:54.060170 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 6 23:55:54.060278 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jul 6 23:55:54.060384 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 6 23:55:54.060495 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 6 23:55:54.060595 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:55:54.061712 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:55:54.061776 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:55:54.061857 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jul 6 23:55:54.061920 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:55:54.061980 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 6 23:55:54.062062 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 6 23:55:54.062127 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 6 23:55:54.062197 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jul 6 23:55:54.062261 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 6 23:55:54.062332 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jul 6 23:55:54.062396 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 6 23:55:54.062470 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jul 6 23:55:54.062534 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 6 23:55:54.062603 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jul 6 23:55:54.062684 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 6 23:55:54.062756 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jul 6 23:55:54.062821 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 6 23:55:54.062906 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jul 6 23:55:54.062971 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jul 6 23:55:54.063036 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 6 23:55:54.063106 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jul 6 23:55:54.063172 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jul 6 23:55:54.063236 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 6 23:55:54.063311 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jul 6 23:55:54.063379 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 6 23:55:54.063444 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 6 23:55:54.063455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:55:54.063463 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:55:54.063471 kernel: Initialise system trusted keyrings
Jul 6 23:55:54.063479 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:55:54.063487 kernel: Key type asymmetric registered
Jul 6 23:55:54.063495 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:55:54.063504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:55:54.063512 kernel: io scheduler mq-deadline registered
Jul 6 23:55:54.063519 kernel: io scheduler kyber registered
Jul 6 23:55:54.063526 kernel: io scheduler bfq registered
Jul 6 23:55:54.063605 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jul 6 23:55:54.066719 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jul 6 23:55:54.066791 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jul 6 23:55:54.066872 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jul 6 23:55:54.066942 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jul 6 23:55:54.067017 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jul 6 23:55:54.067087 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jul 6 23:55:54.067156 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jul 6 23:55:54.067225 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jul 6 23:55:54.067294 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jul 6 23:55:54.067364 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jul 6 23:55:54.067433 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jul 6 23:55:54.067531 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jul 6 23:55:54.067607 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jul 6 23:55:54.067691 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jul 6 23:55:54.067761 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jul 6 23:55:54.067772 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:55:54.067850 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jul 6 23:55:54.067921 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jul 6 23:55:54.067931 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:55:54.067940 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jul 6 23:55:54.067950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:55:54.067958 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:55:54.067965 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:55:54.067973 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:55:54.067980 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:55:54.068054 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 6 23:55:54.068065 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:55:54.068128 kernel: rtc_cmos 00:03: registered as rtc0
Jul 6 23:55:54.068197 kernel: rtc_cmos 00:03: setting system 
clock to 2025-07-06T23:55:53 UTC (1751846153) Jul 6 23:55:54.068262 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 6 23:55:54.068271 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 6 23:55:54.068280 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:55:54.068287 kernel: Segment Routing with IPv6 Jul 6 23:55:54.068295 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:55:54.068302 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:55:54.068310 kernel: Key type dns_resolver registered Jul 6 23:55:54.068317 kernel: IPI shorthand broadcast: enabled Jul 6 23:55:54.068326 kernel: sched_clock: Marking stable (1305020231, 147298254)->(1463531339, -11212854) Jul 6 23:55:54.068334 kernel: registered taskstats version 1 Jul 6 23:55:54.068343 kernel: Loading compiled-in X.509 certificates Jul 6 23:55:54.068350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:55:54.068358 kernel: Key type .fscrypt registered Jul 6 23:55:54.068365 kernel: Key type fscrypt-provisioning registered Jul 6 23:55:54.068372 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 6 23:55:54.068380 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:55:54.068390 kernel: ima: No architecture policies found Jul 6 23:55:54.068397 kernel: clk: Disabling unused clocks Jul 6 23:55:54.068405 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:55:54.068412 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:55:54.068420 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:55:54.068427 kernel: Run /init as init process Jul 6 23:55:54.068434 kernel: with arguments: Jul 6 23:55:54.068442 kernel: /init Jul 6 23:55:54.068449 kernel: with environment: Jul 6 23:55:54.068456 kernel: HOME=/ Jul 6 23:55:54.068465 kernel: TERM=linux Jul 6 23:55:54.068472 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:55:54.068482 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:54.068491 systemd[1]: Detected virtualization kvm. Jul 6 23:55:54.068500 systemd[1]: Detected architecture x86-64. Jul 6 23:55:54.068507 systemd[1]: Running in initrd. Jul 6 23:55:54.068515 systemd[1]: No hostname configured, using default hostname. Jul 6 23:55:54.068524 systemd[1]: Hostname set to . Jul 6 23:55:54.068532 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:55:54.068540 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:55:54.068548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:54.068556 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:54.068565 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 6 23:55:54.068573 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:54.068580 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:55:54.068590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:55:54.068599 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:55:54.068607 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:55:54.068615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:54.069767 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:54.069805 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:54.069816 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:54.069846 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:54.069856 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:55:54.069866 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:54.069878 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:54.069889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:55:54.069900 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:55:54.069911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:54.069922 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:54.069933 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:54.069946 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:54.069957 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 6 23:55:54.069967 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:54.069978 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:55:54.069989 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:55:54.069999 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:54.070009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:54.070043 systemd-journald[188]: Collecting audit messages is disabled. Jul 6 23:55:54.070069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:54.070079 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:54.070089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:54.070099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:55:54.070124 systemd-journald[188]: Journal started Jul 6 23:55:54.070159 systemd-journald[188]: Runtime Journal (/run/log/journal/0cfc28abad1946eca8c12166e9dc90fe) is 4.8M, max 38.4M, 33.6M free. Jul 6 23:55:54.044899 systemd-modules-load[189]: Inserted module 'overlay' Jul 6 23:55:54.101676 kernel: Bridge firewalling registered Jul 6 23:55:54.101706 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:54.072174 systemd-modules-load[189]: Inserted module 'br_netfilter' Jul 6 23:55:54.102264 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:55:54.103097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:54.103989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:54.109738 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:54.111126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 6 23:55:54.114122 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:55:54.115035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:54.132021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:54.134433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:54.137912 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:55:54.141851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:54.143316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:54.144733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:54.149897 dracut-cmdline[216]: dracut-dracut-053 Jul 6 23:55:54.154808 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:54.153787 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:54.163116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:54.180962 systemd-resolved[228]: Positive Trust Anchors: Jul 6 23:55:54.180980 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:55:54.181010 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:55:54.189844 systemd-resolved[228]: Defaulting to hostname 'linux'. Jul 6 23:55:54.190701 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:55:54.191435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:54.207669 kernel: SCSI subsystem initialized Jul 6 23:55:54.216660 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:55:54.226658 kernel: iscsi: registered transport (tcp) Jul 6 23:55:54.255169 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:55:54.255227 kernel: QLogic iSCSI HBA Driver Jul 6 23:55:54.295916 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:54.300777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:55:54.329023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 6 23:55:54.329115 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:55:54.329139 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:55:54.371658 kernel: raid6: avx2x4 gen() 26525 MB/s Jul 6 23:55:54.388652 kernel: raid6: avx2x2 gen() 23636 MB/s Jul 6 23:55:54.405815 kernel: raid6: avx2x1 gen() 26280 MB/s Jul 6 23:55:54.405865 kernel: raid6: using algorithm avx2x4 gen() 26525 MB/s Jul 6 23:55:54.424760 kernel: raid6: .... xor() 7531 MB/s, rmw enabled Jul 6 23:55:54.424802 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:55:54.444683 kernel: xor: automatically using best checksumming function avx Jul 6 23:55:54.630690 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:55:54.642441 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:54.648853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:54.666922 systemd-udevd[406]: Using default interface naming scheme 'v255'. Jul 6 23:55:54.670943 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:54.680885 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:55:54.697713 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jul 6 23:55:54.735513 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:54.740924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:54.782540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:54.793906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:55:54.831739 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:54.834980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 6 23:55:54.837542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:54.839387 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:54.846879 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:55:54.860573 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:54.867874 kernel: scsi host0: Virtio SCSI HBA Jul 6 23:55:54.876719 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:55:54.890645 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 6 23:55:54.904722 kernel: libata version 3.00 loaded. Jul 6 23:55:54.915191 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:54.915292 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:54.933950 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:54.934659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:54.934937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:54.935500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:54.948611 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:55:54.948718 kernel: AES CTR mode by8 optimization enabled Jul 6 23:55:54.948728 kernel: ACPI: bus type USB registered Jul 6 23:55:54.950650 kernel: usbcore: registered new interface driver usbfs Jul 6 23:55:54.950727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:55:54.954657 kernel: usbcore: registered new interface driver hub Jul 6 23:55:54.960878 kernel: usbcore: registered new device driver usb Jul 6 23:55:54.970642 kernel: ahci 0000:00:1f.2: version 3.0 Jul 6 23:55:54.970794 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 6 23:55:54.975673 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 6 23:55:54.975802 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 6 23:55:54.979689 kernel: scsi host1: ahci Jul 6 23:55:54.983589 kernel: scsi host2: ahci Jul 6 23:55:54.987555 kernel: scsi host3: ahci Jul 6 23:55:54.987689 kernel: scsi host4: ahci Jul 6 23:55:54.989643 kernel: scsi host5: ahci Jul 6 23:55:54.989760 kernel: scsi host6: ahci Jul 6 23:55:54.989859 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jul 6 23:55:54.989875 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jul 6 23:55:54.989884 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jul 6 23:55:54.989892 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jul 6 23:55:54.989901 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jul 6 23:55:54.989910 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jul 6 23:55:54.993657 kernel: sd 0:0:0:0: Power-on or device reset occurred Jul 6 23:55:54.993856 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 6 23:55:54.993949 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:55:54.994065 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jul 6 23:55:54.994648 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 6 23:55:54.997642 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jul 6 23:55:54.998285 kernel: GPT:17805311 != 80003071 Jul 6 23:55:54.998300 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:55:54.998309 kernel: GPT:17805311 != 80003071 Jul 6 23:55:54.998319 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:55:54.998330 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:54.998346 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:55:55.048294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:55.053758 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:55.069294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:55.300379 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.300512 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.300670 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 6 23:55:55.306270 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.306654 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.309673 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.312671 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 6 23:55:55.316961 kernel: ata1.00: applying bridge limits Jul 6 23:55:55.317111 kernel: ata1.00: configured for UDMA/100 Jul 6 23:55:55.324679 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 6 23:55:55.356182 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:55:55.356505 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 6 23:55:55.361649 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 6 23:55:55.375757 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:55:55.376081 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 6 23:55:55.376262 kernel: 
xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 6 23:55:55.383728 kernel: hub 1-0:1.0: USB hub found Jul 6 23:55:55.384008 kernel: hub 1-0:1.0: 4 ports detected Jul 6 23:55:55.386658 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 6 23:55:55.398208 kernel: hub 2-0:1.0: USB hub found Jul 6 23:55:55.398510 kernel: hub 2-0:1.0: 4 ports detected Jul 6 23:55:55.403032 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 6 23:55:55.403222 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:55:55.420664 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:55:55.438665 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (453) Jul 6 23:55:55.438915 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 6 23:55:55.443941 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (462) Jul 6 23:55:55.460417 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 6 23:55:55.466282 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 6 23:55:55.467007 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 6 23:55:55.474515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 6 23:55:55.481763 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:55:55.488181 disk-uuid[574]: Primary Header is updated. Jul 6 23:55:55.488181 disk-uuid[574]: Secondary Entries is updated. Jul 6 23:55:55.488181 disk-uuid[574]: Secondary Header is updated. 
Jul 6 23:55:55.501670 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:55.508658 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:55.519664 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:55.632656 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 6 23:55:55.773705 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:55:55.780529 kernel: usbcore: registered new interface driver usbhid Jul 6 23:55:55.780799 kernel: usbhid: USB HID core driver Jul 6 23:55:55.786217 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jul 6 23:55:55.786264 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 6 23:55:56.533739 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:56.536806 disk-uuid[575]: The operation has completed successfully. Jul 6 23:55:56.619171 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:55:56.619296 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:55:56.651861 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:55:56.657410 sh[595]: Success Jul 6 23:55:56.678670 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 6 23:55:56.752064 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:55:56.768748 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:55:56.773358 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 6 23:55:56.796468 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:55:56.796565 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:56.796611 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:55:56.798987 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:55:56.801344 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:55:56.810687 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:55:56.813602 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:55:56.815436 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:55:56.825776 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:55:56.827706 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:55:56.847494 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:56.847557 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:56.847567 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:56.852503 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:55:56.852532 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:56.866182 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:55:56.867435 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:56.876314 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:55:56.886551 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 6 23:55:56.961615 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:56.966810 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:56.973958 ignition[703]: Ignition 2.19.0 Jul 6 23:55:56.974525 ignition[703]: Stage: fetch-offline Jul 6 23:55:56.974558 ignition[703]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:56.974565 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:56.974668 ignition[703]: parsed url from cmdline: "" Jul 6 23:55:56.977641 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:56.974671 ignition[703]: no config URL provided Jul 6 23:55:56.974675 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:56.974681 ignition[703]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:56.974685 ignition[703]: failed to fetch config: resource requires networking Jul 6 23:55:56.974826 ignition[703]: Ignition finished successfully Jul 6 23:55:56.989035 systemd-networkd[780]: lo: Link UP Jul 6 23:55:56.989045 systemd-networkd[780]: lo: Gained carrier Jul 6 23:55:56.990713 systemd-networkd[780]: Enumeration completed Jul 6 23:55:56.991346 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:56.991663 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:56.991667 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:56.992157 systemd[1]: Reached target network.target - Network. Jul 6 23:55:56.993587 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:56.993590 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 6 23:55:56.994487 systemd-networkd[780]: eth0: Link UP Jul 6 23:55:56.994490 systemd-networkd[780]: eth0: Gained carrier Jul 6 23:55:56.994496 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:56.997814 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 6 23:55:57.000879 systemd-networkd[780]: eth1: Link UP Jul 6 23:55:57.000883 systemd-networkd[780]: eth1: Gained carrier Jul 6 23:55:57.000892 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:57.012960 ignition[784]: Ignition 2.19.0 Jul 6 23:55:57.012975 ignition[784]: Stage: fetch Jul 6 23:55:57.013201 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.013213 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.013312 ignition[784]: parsed url from cmdline: "" Jul 6 23:55:57.013316 ignition[784]: no config URL provided Jul 6 23:55:57.013322 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:57.013331 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:57.013351 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jul 6 23:55:57.013500 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 6 23:55:57.026700 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:55:57.064753 systemd-networkd[780]: eth0: DHCPv4 address 95.217.0.60/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 6 23:55:57.213811 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jul 6 23:55:57.218324 ignition[784]: GET result: OK Jul 6 23:55:57.218481 ignition[784]: parsing config with SHA512: 
ae496d7d175f9e24b8444529db7ca3fe53dbe0f16a85efddd2fd2ada4c15e424913763f4ab208dd6b6b30713874efe7f533c6cd222c51a963394f60086a0ade3 Jul 6 23:55:57.225881 unknown[784]: fetched base config from "system" Jul 6 23:55:57.226666 ignition[784]: fetch: fetch complete Jul 6 23:55:57.225902 unknown[784]: fetched base config from "system" Jul 6 23:55:57.226676 ignition[784]: fetch: fetch passed Jul 6 23:55:57.225912 unknown[784]: fetched user config from "hetzner" Jul 6 23:55:57.226741 ignition[784]: Ignition finished successfully Jul 6 23:55:57.230227 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:55:57.237981 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:55:57.263013 ignition[791]: Ignition 2.19.0 Jul 6 23:55:57.263755 ignition[791]: Stage: kargs Jul 6 23:55:57.264269 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.264289 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.266252 ignition[791]: kargs: kargs passed Jul 6 23:55:57.268300 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:55:57.266332 ignition[791]: Ignition finished successfully Jul 6 23:55:57.278061 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:55:57.300194 ignition[798]: Ignition 2.19.0 Jul 6 23:55:57.300218 ignition[798]: Stage: disks Jul 6 23:55:57.304452 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:55:57.300541 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.311210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:57.300559 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.312364 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:55:57.302719 ignition[798]: disks: disks passed Jul 6 23:55:57.313417 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 6 23:55:57.302789 ignition[798]: Ignition finished successfully
Jul 6 23:55:57.314473 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:57.316656 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:57.324850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:55:57.346751 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 6 23:55:57.350679 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:55:57.357820 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:55:57.457647 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:55:57.459550 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:55:57.461238 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:55:57.468702 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:57.477791 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:55:57.479611 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:55:57.480190 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:55:57.480222 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:57.487788 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:55:57.498903 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:55:57.514161 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815)
Jul 6 23:55:57.514194 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:57.514210 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:57.514226 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:57.527985 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:55:57.528054 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:55:57.533533 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:57.583062 coreos-metadata[817]: Jul 06 23:55:57.582 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jul 6 23:55:57.585224 coreos-metadata[817]: Jul 06 23:55:57.584 INFO Fetch successful
Jul 6 23:55:57.585885 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:55:57.586571 coreos-metadata[817]: Jul 06 23:55:57.585 INFO wrote hostname ci-4081-3-4-6-7e2061accb to /sysroot/etc/hostname
Jul 6 23:55:57.587283 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:55:57.593150 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:55:57.598264 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:55:57.602740 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:55:57.691120 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:57.705804 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:55:57.710425 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:57.716638 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:57.749863 ignition[933]: INFO : Ignition 2.19.0
Jul 6 23:55:57.749863 ignition[933]: INFO : Stage: mount
Jul 6 23:55:57.749863 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:57.749863 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 6 23:55:57.748559 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:57.755020 ignition[933]: INFO : mount: mount passed
Jul 6 23:55:57.755020 ignition[933]: INFO : Ignition finished successfully
Jul 6 23:55:57.753192 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:55:57.760744 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:55:57.790747 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:55:57.795981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:57.809662 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Jul 6 23:55:57.810721 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:57.813605 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:57.813654 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:57.820736 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:55:57.820795 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:55:57.827261 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:57.856297 ignition[960]: INFO : Ignition 2.19.0
Jul 6 23:55:57.856297 ignition[960]: INFO : Stage: files
Jul 6 23:55:57.858724 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:57.858724 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 6 23:55:57.862257 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:55:57.862257 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:55:57.862257 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:55:57.867850 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:55:57.867850 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:55:57.872128 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:55:57.872128 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:57.872128 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:55:57.868230 unknown[960]: wrote ssh authorized keys file for user: core
Jul 6 23:55:58.192528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:55:58.487886 systemd-networkd[780]: eth1: Gained IPv6LL
Jul 6 23:55:58.999993 systemd-networkd[780]: eth0: Gained IPv6LL
Jul 6 23:55:59.879975 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:59.882010 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:55:59.882010 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 6 23:56:00.556260 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:56:00.690346 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:56:00.692877 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:56:01.433847 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:56:01.637230 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:56:01.637230 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:56:01.642022 ignition[960]: INFO : files: files passed
Jul 6 23:56:01.642022 ignition[960]: INFO : Ignition finished successfully
Jul 6 23:56:01.643442 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:56:01.657940 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:56:01.664973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:56:01.668088 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:56:01.668202 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:56:01.683401 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:56:01.683401 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:56:01.686259 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:56:01.686709 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:56:01.688378 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:56:01.699810 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:56:01.719530 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:56:01.719668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:56:01.721068 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:56:01.721914 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:56:01.722971 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:56:01.731897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:56:01.744203 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:56:01.750788 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:56:01.763222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:56:01.764525 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:56:01.766094 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:56:01.767375 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:56:01.767611 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:56:01.769132 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:56:01.770729 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:56:01.772083 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:56:01.773173 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:56:01.774476 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:56:01.775948 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:56:01.777290 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:56:01.778725 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:56:01.780138 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:56:01.781609 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:56:01.782931 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:56:01.783127 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:56:01.784729 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:56:01.786297 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:56:01.787751 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:56:01.788302 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:56:01.789595 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:56:01.789765 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:56:01.791543 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:56:01.791889 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:56:01.793208 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:56:01.793379 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:56:01.794472 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 6 23:56:01.794619 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:56:01.802867 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:56:01.804176 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:56:01.804293 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:56:01.808780 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:56:01.809271 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:56:01.809418 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:56:01.810148 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:56:01.810287 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:56:01.823138 ignition[1013]: INFO : Ignition 2.19.0
Jul 6 23:56:01.823138 ignition[1013]: INFO : Stage: umount
Jul 6 23:56:01.823138 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:56:01.823138 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 6 23:56:01.823138 ignition[1013]: INFO : umount: umount passed
Jul 6 23:56:01.823138 ignition[1013]: INFO : Ignition finished successfully
Jul 6 23:56:01.832929 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:56:01.833017 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:56:01.834646 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:56:01.834806 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:56:01.835340 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:56:01.835426 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:56:01.836877 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:56:01.836913 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:56:01.837347 systemd[1]: Stopped target network.target - Network.
Jul 6 23:56:01.837795 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:56:01.837848 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:56:01.839147 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:56:01.840150 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:56:01.844059 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:56:01.847447 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:56:01.848350 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:56:01.849679 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:56:01.849718 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:56:01.851827 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:56:01.851908 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:56:01.852846 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:56:01.852906 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:56:01.854148 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:56:01.854185 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:56:01.855411 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:56:01.856353 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:56:01.860306 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:56:01.860983 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:56:01.861063 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:56:01.862143 systemd-networkd[780]: eth0: DHCPv6 lease lost
Jul 6 23:56:01.863091 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:56:01.863161 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:56:01.863782 systemd-networkd[780]: eth1: DHCPv6 lease lost
Jul 6 23:56:01.864178 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:56:01.864242 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:56:01.866050 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:56:01.866122 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:56:01.869282 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:56:01.869776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:56:01.870945 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:56:01.870993 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:56:01.877751 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:56:01.878501 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:56:01.878568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:56:01.880542 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:56:01.880582 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:56:01.881579 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:56:01.881612 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:56:01.882721 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:56:01.882754 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:56:01.883983 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:56:01.895082 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:56:01.895537 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:56:01.897438 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:56:01.897559 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:56:01.898812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:56:01.898874 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:56:01.899989 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:56:01.900017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:56:01.901227 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:56:01.901266 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:56:01.902771 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:56:01.902808 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:56:01.904080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:56:01.904116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:56:01.910824 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:56:01.911565 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:56:01.911616 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:56:01.912218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:56:01.912256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:56:01.916678 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:56:01.916755 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:56:01.919170 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:56:01.920748 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:56:01.939281 systemd[1]: Switching root.
Jul 6 23:56:01.996664 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:56:01.996781 systemd-journald[188]: Journal stopped
Jul 6 23:56:03.089554 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:56:03.089609 kernel: SELinux: policy capability open_perms=1
Jul 6 23:56:03.089671 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:56:03.089683 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:56:03.089699 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:56:03.089711 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:56:03.089724 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:56:03.089737 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:56:03.089746 kernel: audit: type=1403 audit(1751846162.191:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:56:03.089758 systemd[1]: Successfully loaded SELinux policy in 52.680ms.
Jul 6 23:56:03.089775 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.501ms.
Jul 6 23:56:03.089788 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:56:03.089797 systemd[1]: Detected virtualization kvm.
Jul 6 23:56:03.089807 systemd[1]: Detected architecture x86-64.
Jul 6 23:56:03.089816 systemd[1]: Detected first boot.
Jul 6 23:56:03.089827 systemd[1]: Hostname set to .
Jul 6 23:56:03.089846 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:56:03.089858 zram_generator::config[1055]: No configuration found.
Jul 6 23:56:03.089869 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:56:03.089880 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:56:03.089889 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:56:03.089898 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:56:03.089908 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:56:03.089920 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:56:03.089933 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:56:03.089946 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:56:03.089959 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:56:03.089973 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:56:03.089985 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:56:03.089995 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:56:03.090009 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:56:03.090025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:56:03.090037 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:56:03.090050 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:56:03.090060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:56:03.090070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:56:03.090079 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:56:03.090088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:56:03.090098 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:56:03.090108 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:56:03.090121 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:56:03.090134 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:56:03.090145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:56:03.090157 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:56:03.090166 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:56:03.090176 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:56:03.090185 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:56:03.090196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:56:03.090206 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:56:03.090215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:56:03.090227 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:56:03.090237 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:56:03.090246 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:56:03.090257 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:56:03.090271 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:56:03.090290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:03.090305 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:56:03.090314 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:56:03.090324 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:56:03.090335 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:56:03.090344 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:56:03.090356 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:56:03.090366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:56:03.090376 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:56:03.090386 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:56:03.090396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:56:03.090406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:56:03.090416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:56:03.090429 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:56:03.090445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:56:03.090459 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:56:03.090472 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:56:03.090485 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:56:03.090498 kernel: fuse: init (API version 7.39)
Jul 6 23:56:03.090511 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:56:03.090524 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:56:03.090537 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:56:03.090547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:56:03.090560 kernel: loop: module loaded
Jul 6 23:56:03.090576 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:56:03.090592 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:56:03.090606 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:56:03.090616 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:56:03.090644 systemd[1]: Stopped verity-setup.service.
Jul 6 23:56:03.090654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:03.090663 kernel: ACPI: bus type drm_connector registered
Jul 6 23:56:03.090674 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:56:03.090690 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:56:03.090704 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:56:03.090732 systemd-journald[1142]: Collecting audit messages is disabled.
Jul 6 23:56:03.090763 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:56:03.090777 systemd-journald[1142]: Journal started
Jul 6 23:56:03.091166 systemd-journald[1142]: Runtime Journal (/run/log/journal/0cfc28abad1946eca8c12166e9dc90fe) is 4.8M, max 38.4M, 33.6M free.
Jul 6 23:56:02.759053 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:56:02.788206 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 6 23:56:02.789176 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:56:03.094687 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:56:03.095756 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:56:03.096443 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:56:03.097267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:56:03.098047 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:56:03.098896 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:56:03.099045 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:56:03.099992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:56:03.100186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:56:03.100994 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:56:03.101180 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:56:03.102092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:56:03.102274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:56:03.103181 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:56:03.103363 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:56:03.104146 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:56:03.104352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:56:03.105148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:56:03.105963 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:56:03.107031 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:56:03.116531 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:56:03.123077 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:56:03.128406 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:56:03.129529 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:56:03.129760 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:56:03.131214 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 6 23:56:03.139030 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:56:03.140894 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:56:03.142465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:56:03.147601 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:56:03.154190 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:56:03.154792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:56:03.157367 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:56:03.158697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:56:03.161819 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:56:03.165986 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:56:03.168112 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:56:03.170597 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:56:03.172277 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:56:03.174944 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:56:03.191122 systemd-journald[1142]: Time spent on flushing to /var/log/journal/0cfc28abad1946eca8c12166e9dc90fe is 44.105ms for 1135 entries.
Jul 6 23:56:03.191122 systemd-journald[1142]: System Journal (/var/log/journal/0cfc28abad1946eca8c12166e9dc90fe) is 8.0M, max 584.8M, 576.8M free.
Jul 6 23:56:03.253255 systemd-journald[1142]: Received client request to flush runtime journal.
Jul 6 23:56:03.253291 kernel: loop0: detected capacity change from 0 to 140768
Jul 6 23:56:03.206682 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:56:03.218917 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:56:03.220777 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:56:03.226958 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 6 23:56:03.238223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:56:03.248888 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:56:03.255813 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:56:03.260668 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:56:03.273635 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:56:03.276707 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 6 23:56:03.283438 kernel: loop1: detected capacity change from 0 to 221472
Jul 6 23:56:03.283641 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:56:03.299206 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:56:03.308957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:56:03.330773 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jul 6 23:56:03.330795 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jul 6 23:56:03.335344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:56:03.345997 kernel: loop2: detected capacity change from 0 to 142488
Jul 6 23:56:03.397653 kernel: loop3: detected capacity change from 0 to 8
Jul 6 23:56:03.422656 kernel: loop4: detected capacity change from 0 to 140768
Jul 6 23:56:03.457654 kernel: loop5: detected capacity change from 0 to 221472
Jul 6 23:56:03.479881 kernel: loop6: detected capacity change from 0 to 142488
Jul 6 23:56:03.511639 kernel: loop7: detected capacity change from 0 to 8
Jul 6 23:56:03.516849 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jul 6 23:56:03.517599 (sd-merge)[1200]: Merged extensions into '/usr'.
Jul 6 23:56:03.522732 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:56:03.522882 systemd[1]: Reloading...
Jul 6 23:56:03.591650 zram_generator::config[1222]: No configuration found.
Jul 6 23:56:03.744809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:56:03.750352 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:56:03.798924 systemd[1]: Reloading finished in 275 ms.
Jul 6 23:56:03.821760 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:56:03.822761 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:56:03.833858 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:56:03.836485 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:56:03.847696 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:56:03.849655 systemd[1]: Reloading...
Jul 6 23:56:03.871193 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:56:03.871952 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:56:03.872740 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:56:03.873056 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 6 23:56:03.873159 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 6 23:56:03.876790 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:56:03.876802 systemd-tmpfiles[1270]: Skipping /boot
Jul 6 23:56:03.895267 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:56:03.897023 systemd-tmpfiles[1270]: Skipping /boot
Jul 6 23:56:03.938692 zram_generator::config[1296]: No configuration found.
Jul 6 23:56:04.063811 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:56:04.117371 systemd[1]: Reloading finished in 267 ms.
Jul 6 23:56:04.136962 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:56:04.145249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:56:04.162131 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:56:04.167446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:56:04.170705 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:56:04.181871 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:56:04.185853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:56:04.194888 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:56:04.199120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.199614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:56:04.203494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:56:04.206066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:56:04.209738 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:56:04.210760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:56:04.213718 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:56:04.215724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.219500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.219681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:56:04.219817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:56:04.219901 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.226731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.227146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:56:04.236750 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:56:04.237400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:56:04.237571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.239683 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:56:04.241310 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:56:04.252888 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:56:04.253014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:56:04.259220 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Jul 6 23:56:04.261807 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:56:04.268883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:56:04.269187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:56:04.272230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:56:04.273461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:56:04.276033 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:56:04.276143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:56:04.279404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:56:04.279452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:56:04.279912 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:56:04.281180 augenrules[1375]: No rules
Jul 6 23:56:04.287816 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:56:04.289709 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:56:04.293482 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:56:04.308905 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:56:04.320756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:56:04.329859 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:56:04.330683 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:56:04.334154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:56:04.388424 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:56:04.450247 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:56:04.451002 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:56:04.458287 systemd-resolved[1351]: Positive Trust Anchors:
Jul 6 23:56:04.458307 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:56:04.458340 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:56:04.463489 systemd-networkd[1391]: lo: Link UP
Jul 6 23:56:04.463499 systemd-networkd[1391]: lo: Gained carrier
Jul 6 23:56:04.469562 systemd-resolved[1351]: Using system hostname 'ci-4081-3-4-6-7e2061accb'.
Jul 6 23:56:04.470488 systemd-networkd[1391]: Enumeration completed
Jul 6 23:56:04.470571 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:56:04.473276 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:56:04.473285 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:56:04.474333 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:56:04.474336 systemd-networkd[1391]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:56:04.475311 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:56:04.475346 systemd-networkd[1391]: eth0: Link UP
Jul 6 23:56:04.475349 systemd-networkd[1391]: eth0: Gained carrier
Jul 6 23:56:04.475358 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:56:04.478848 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:56:04.478948 systemd-networkd[1391]: eth1: Link UP
Jul 6 23:56:04.478951 systemd-networkd[1391]: eth1: Gained carrier
Jul 6 23:56:04.478965 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:56:04.479828 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:56:04.480431 systemd[1]: Reached target network.target - Network.
Jul 6 23:56:04.481582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:56:04.507772 systemd-networkd[1391]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:56:04.508533 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Jul 6 23:56:04.512649 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 6 23:56:04.519596 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:56:04.518732 systemd-networkd[1391]: eth0: DHCPv4 address 95.217.0.60/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 6 23:56:04.519113 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Jul 6 23:56:04.520045 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Jul 6 23:56:04.525675 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1400)
Jul 6 23:56:04.528670 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:56:04.548462 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jul 6 23:56:04.548690 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.548793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:56:04.555769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:56:04.557240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:56:04.560801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:56:04.561414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:56:04.561447 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:56:04.561459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:56:04.572945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:56:04.573808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:56:04.575978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:56:04.576092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:56:04.577459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:56:04.581866 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 6 23:56:04.582098 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 6 23:56:04.582911 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 6 23:56:04.590350 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:56:04.590953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:56:04.592209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:56:04.600763 kernel: EDAC MC: Ver: 3.0.0
Jul 6 23:56:04.614645 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 6 23:56:04.623660 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jul 6 23:56:04.626841 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jul 6 23:56:04.631317 kernel: Console: switching to colour dummy device 80x25
Jul 6 23:56:04.633783 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 6 23:56:04.633819 kernel: [drm] features: -context_init
Jul 6 23:56:04.649663 kernel: [drm] number of scanouts: 1
Jul 6 23:56:04.649730 kernel: [drm] number of cap sets: 0
Jul 6 23:56:04.650669 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jul 6 23:56:04.657664 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 6 23:56:04.657733 kernel: Console: switching to colour frame buffer device 160x50
Jul 6 23:56:04.664644 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 6 23:56:04.665447 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 6 23:56:04.676378 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:56:04.680748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:56:04.685863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:56:04.689954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:56:04.690100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:56:04.697796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:56:04.760552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:56:04.807935 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:56:04.814886 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:56:04.845936 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:56:04.890349 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:56:04.890887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:56:04.891032 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:56:04.891278 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:56:04.891446 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:56:04.891876 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:56:04.892104 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:56:04.892226 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:56:04.892321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:56:04.892371 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:56:04.892450 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:56:04.895195 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:56:04.899292 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:56:04.909000 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:56:04.911911 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:56:04.916972 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:56:04.921341 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:56:04.923492 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:56:04.925483 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:56:04.925685 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:56:04.935481 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:56:04.937049 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:56:04.951888 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 6 23:56:04.959034 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:56:04.976871 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:56:04.990997 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:56:04.993334 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:56:05.003672 coreos-metadata[1454]: Jul 06 23:56:05.002 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jul 6 23:56:05.004579 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:56:05.013316 coreos-metadata[1454]: Jul 06 23:56:05.013 INFO Fetch successful
Jul 6 23:56:05.013316 coreos-metadata[1454]: Jul 06 23:56:05.013 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jul 6 23:56:05.014670 coreos-metadata[1454]: Jul 06 23:56:05.013 INFO Fetch successful
Jul 6 23:56:05.014940 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:56:05.018876 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jul 6 23:56:05.021915 jq[1458]: false
Jul 6 23:56:05.030904 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:56:05.039343 dbus-daemon[1455]: [system] SELinux support is enabled
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found loop4
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found loop5
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found loop6
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found loop7
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda1
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda2
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda3
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found usr
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda4
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda6
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda7
Jul 6 23:56:05.055536 extend-filesystems[1459]: Found sda9
Jul 6 23:56:05.055536 extend-filesystems[1459]: Checking size of /dev/sda9
Jul 6 23:56:05.131499 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jul 6 23:56:05.042239 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:56:05.133001 extend-filesystems[1459]: Resized partition /dev/sda9
Jul 6 23:56:05.054172 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:56:05.144918 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024)
Jul 6 23:56:05.057934 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:56:05.058560 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:56:05.064822 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:56:05.079023 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:56:05.099381 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:56:05.151611 jq[1478]: true
Jul 6 23:56:05.108856 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:56:05.124023 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:56:05.124699 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:56:05.125012 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:56:05.125187 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:56:05.139107 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:56:05.139733 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:56:05.154250 update_engine[1472]: I20250706 23:56:05.154151 1472 main.cc:92] Flatcar Update Engine starting
Jul 6 23:56:05.159266 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:56:05.159317 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:56:05.162691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:56:05.162731 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:56:05.172525 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:56:05.183124 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1395)
Jul 6 23:56:05.184565 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:56:05.186111 update_engine[1472]: I20250706 23:56:05.185230  1472 update_check_scheduler.cc:74] Next update check in 6m15s
Jul 6 23:56:05.193206 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:56:05.200223 tar[1487]: linux-amd64/helm
Jul 6 23:56:05.200418 jq[1488]: true
Jul 6 23:56:05.276042 systemd-logind[1467]: New seat seat0.
Jul 6 23:56:05.290456 systemd-logind[1467]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 6 23:56:05.291087 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:56:05.291516 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:56:05.345521 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:56:05.374377 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jul 6 23:56:05.349307 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 6 23:56:05.353724 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:56:05.383038 bash[1524]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:56:05.385945 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:56:05.391110 extend-filesystems[1482]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 6 23:56:05.391110 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 5
Jul 6 23:56:05.391110 extend-filesystems[1482]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jul 6 23:56:05.418340 extend-filesystems[1459]: Resized filesystem in /dev/sda9
Jul 6 23:56:05.418340 extend-filesystems[1459]: Found sr0
Jul 6 23:56:05.395207 systemd[1]: Starting sshkeys.service...
Jul 6 23:56:05.411208 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:56:05.411414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:56:05.446470 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 6 23:56:05.457355 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 6 23:56:05.498935 coreos-metadata[1539]: Jul 06 23:56:05.498 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jul 6 23:56:05.500614 coreos-metadata[1539]: Jul 06 23:56:05.500 INFO Fetch successful
Jul 6 23:56:05.503531 unknown[1539]: wrote ssh authorized keys file for user: core
Jul 6 23:56:05.528948 update-ssh-keys[1543]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:56:05.529649 containerd[1489]: time="2025-07-06T23:56:05.529415076Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 6 23:56:05.532057 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 6 23:56:05.541698 systemd[1]: Finished sshkeys.service.
Jul 6 23:56:05.602686 containerd[1489]: time="2025-07-06T23:56:05.602407201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604008474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604044101Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604062986Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604238645Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604258692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604320548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604331880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604505225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604523609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604536012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605454 containerd[1489]: time="2025-07-06T23:56:05.604544659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605714 containerd[1489]: time="2025-07-06T23:56:05.604644526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605714 containerd[1489]: time="2025-07-06T23:56:05.604865951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605714 containerd[1489]: time="2025-07-06T23:56:05.604983943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:05.605714 containerd[1489]: time="2025-07-06T23:56:05.605000994Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:56:05.605714 containerd[1489]: time="2025-07-06T23:56:05.605095452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:56:05.605714 containerd[1489]: time="2025-07-06T23:56:05.605148932Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:56:05.612684 containerd[1489]: time="2025-07-06T23:56:05.612646798Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:56:05.612724 containerd[1489]: time="2025-07-06T23:56:05.612702153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:56:05.612724 containerd[1489]: time="2025-07-06T23:56:05.612719906Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:56:05.612761 containerd[1489]: time="2025-07-06T23:56:05.612734483Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:56:05.612761 containerd[1489]: time="2025-07-06T23:56:05.612747768Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:56:05.612907 containerd[1489]: time="2025-07-06T23:56:05.612885637Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:56:05.613160 containerd[1489]: time="2025-07-06T23:56:05.613135886Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:56:05.613245 containerd[1489]: time="2025-07-06T23:56:05.613223981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:56:05.613273 containerd[1489]: time="2025-07-06T23:56:05.613245681Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:56:05.613273 containerd[1489]: time="2025-07-06T23:56:05.613257223Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:56:05.613273 containerd[1489]: time="2025-07-06T23:56:05.613268684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613321 containerd[1489]: time="2025-07-06T23:56:05.613280156Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613321 containerd[1489]: time="2025-07-06T23:56:05.613291227Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613321 containerd[1489]: time="2025-07-06T23:56:05.613303300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613321 containerd[1489]: time="2025-07-06T23:56:05.613315452Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613389 containerd[1489]: time="2025-07-06T23:56:05.613327775Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613389 containerd[1489]: time="2025-07-06T23:56:05.613341812Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613389 containerd[1489]: time="2025-07-06T23:56:05.613354716Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:56:05.613389 containerd[1489]: time="2025-07-06T23:56:05.613377539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613389 containerd[1489]: time="2025-07-06T23:56:05.613389141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613400752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613414328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613425429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613436670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613446658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613458180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613471074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613491 containerd[1489]: time="2025-07-06T23:56:05.613484519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613494338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613506230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613517290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613530425Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613547307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613557706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613566783Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613602180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:56:05.613632 containerd[1489]: time="2025-07-06T23:56:05.613617138Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:56:05.613987 containerd[1489]: time="2025-07-06T23:56:05.613646392Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:56:05.613987 containerd[1489]: time="2025-07-06T23:56:05.613658555Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:56:05.613987 containerd[1489]: time="2025-07-06T23:56:05.613667392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.613987 containerd[1489]: time="2025-07-06T23:56:05.613678272Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:56:05.613987 containerd[1489]: time="2025-07-06T23:56:05.613686859Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:56:05.613987 containerd[1489]: time="2025-07-06T23:56:05.613695896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:56:05.614085 containerd[1489]: time="2025-07-06T23:56:05.613946275Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 6 23:56:05.614085 containerd[1489]: time="2025-07-06T23:56:05.613995888Z" level=info msg="Connect containerd service"
Jul 6 23:56:05.614085 containerd[1489]: time="2025-07-06T23:56:05.614024221Z" level=info msg="using legacy CRI server"
Jul 6 23:56:05.614085 containerd[1489]: time="2025-07-06T23:56:05.614029441Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:56:05.614265 containerd[1489]: time="2025-07-06T23:56:05.614135690Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.614614548Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.615988644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616021536Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616053776Z" level=info msg="Start subscribing containerd event"
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616088702Z" level=info msg="Start recovering state"
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616141160Z" level=info msg="Start event monitor"
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616155387Z" level=info msg="Start snapshots syncer"
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616162620Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616168852Z" level=info msg="Start streaming server"
Jul 6 23:56:05.618533 containerd[1489]: time="2025-07-06T23:56:05.616216020Z" level=info msg="containerd successfully booted in 0.089952s"
Jul 6 23:56:05.616428 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:56:05.719887 systemd-networkd[1391]: eth1: Gained IPv6LL
Jul 6 23:56:05.720501 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Jul 6 23:56:05.725429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:56:05.728168 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:56:05.739806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:05.747439 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:56:05.789180 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:56:05.904758 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:56:05.921777 tar[1487]: linux-amd64/LICENSE
Jul 6 23:56:05.921777 tar[1487]: linux-amd64/README.md
Jul 6 23:56:05.933209 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:56:05.937186 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:56:05.946870 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:56:05.955732 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:56:05.955939 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:56:05.966296 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:56:05.974394 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:56:05.985614 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:56:05.996154 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:56:05.999129 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:56:06.424415 systemd-networkd[1391]: eth0: Gained IPv6LL
Jul 6 23:56:06.425781 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Jul 6 23:56:07.083084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:07.086318 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:56:07.087551 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:56:07.089691 systemd[1]: Startup finished in 1.499s (kernel) + 8.417s (initrd) + 4.948s (userspace) = 14.864s.
Jul 6 23:56:07.893754 kubelet[1586]: E0706 23:56:07.893677    1586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:56:07.895949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:56:07.896127 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:56:07.896449 systemd[1]: kubelet.service: Consumed 1.412s CPU time.
Jul 6 23:56:08.652588 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:56:08.660174 systemd[1]: Started sshd@0-95.217.0.60:22-147.75.109.163:54382.service - OpenSSH per-connection server daemon (147.75.109.163:54382).
Jul 6 23:56:09.713700 sshd[1598]: Accepted publickey for core from 147.75.109.163 port 54382 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 6 23:56:09.716798 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:09.733178 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:56:09.741197 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:56:09.746707 systemd-logind[1467]: New session 1 of user core.
Jul 6 23:56:09.763850 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:56:09.769080 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:56:09.775817 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:56:09.906682 systemd[1602]: Queued start job for default target default.target.
Jul 6 23:56:09.917475 systemd[1602]: Created slice app.slice - User Application Slice.
Jul 6 23:56:09.917500 systemd[1602]: Reached target paths.target - Paths.
Jul 6 23:56:09.917511 systemd[1602]: Reached target timers.target - Timers.
Jul 6 23:56:09.918875 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:56:09.940412 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:56:09.940615 systemd[1602]: Reached target sockets.target - Sockets.
Jul 6 23:56:09.941029 systemd[1602]: Reached target basic.target - Basic System.
Jul 6 23:56:09.941174 systemd[1602]: Reached target default.target - Main User Target.
Jul 6 23:56:09.941296 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:56:09.941447 systemd[1602]: Startup finished in 156ms.
Jul 6 23:56:09.948861 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:56:10.660955 systemd[1]: Started sshd@1-95.217.0.60:22-147.75.109.163:54384.service - OpenSSH per-connection server daemon (147.75.109.163:54384).
Jul 6 23:56:11.648269 sshd[1613]: Accepted publickey for core from 147.75.109.163 port 54384 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 6 23:56:11.650338 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:11.658325 systemd-logind[1467]: New session 2 of user core.
Jul 6 23:56:11.667879 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:56:12.336065 sshd[1613]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:12.339911 systemd[1]: sshd@1-95.217.0.60:22-147.75.109.163:54384.service: Deactivated successfully.
Jul 6 23:56:12.342887 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:56:12.344511 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:56:12.346298 systemd-logind[1467]: Removed session 2.
Jul 6 23:56:12.524046 systemd[1]: Started sshd@2-95.217.0.60:22-147.75.109.163:54400.service - OpenSSH per-connection server daemon (147.75.109.163:54400).
Jul 6 23:56:13.557701 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 54400 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 6 23:56:13.560020 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:13.569282 systemd-logind[1467]: New session 3 of user core.
Jul 6 23:56:13.578994 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:56:14.266544 sshd[1620]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:14.270775 systemd[1]: sshd@2-95.217.0.60:22-147.75.109.163:54400.service: Deactivated successfully.
Jul 6 23:56:14.274222 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:56:14.276466 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:56:14.278400 systemd-logind[1467]: Removed session 3.
Jul 6 23:56:14.444017 systemd[1]: Started sshd@3-95.217.0.60:22-147.75.109.163:54402.service - OpenSSH per-connection server daemon (147.75.109.163:54402).
Jul 6 23:56:15.443393 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 54402 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 6 23:56:15.445881 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:15.454547 systemd-logind[1467]: New session 4 of user core.
Jul 6 23:56:15.462749 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:56:16.136135 sshd[1627]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:16.140474 systemd[1]: sshd@3-95.217.0.60:22-147.75.109.163:54402.service: Deactivated successfully.
Jul 6 23:56:16.143539 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:56:16.146222 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:56:16.148237 systemd-logind[1467]: Removed session 4.
Jul 6 23:56:16.313151 systemd[1]: Started sshd@4-95.217.0.60:22-147.75.109.163:49810.service - OpenSSH per-connection server daemon (147.75.109.163:49810).
Jul 6 23:56:17.336033 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 49810 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 6 23:56:17.338322 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:17.346973 systemd-logind[1467]: New session 5 of user core.
Jul 6 23:56:17.359084 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:56:17.885061 sudo[1637]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:56:17.885532 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:56:17.903824 sudo[1637]: pam_unix(sudo:session): session closed for user root
Jul 6 23:56:17.937920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:56:17.945006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:18.067994 sshd[1634]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:18.074720 systemd[1]: sshd@4-95.217.0.60:22-147.75.109.163:49810.service: Deactivated successfully.
Jul 6 23:56:18.078414 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:56:18.085908 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:56:18.088387 systemd-logind[1467]: Removed session 5.
Jul 6 23:56:18.107321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:18.121980 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:56:18.170233 kubelet[1649]: E0706 23:56:18.169199    1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:56:18.172796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:56:18.173040 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:56:18.259364 systemd[1]: Started sshd@5-95.217.0.60:22-147.75.109.163:49814.service - OpenSSH per-connection server daemon (147.75.109.163:49814).
Jul 6 23:56:19.288913 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 49814 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 6 23:56:19.291177 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:19.297730 systemd-logind[1467]: New session 6 of user core.
Jul 6 23:56:19.308857 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:56:19.835118 sudo[1661]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:56:19.835655 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:56:19.841762 sudo[1661]: pam_unix(sudo:session): session closed for user root
Jul 6 23:56:19.851253 sudo[1660]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 6 23:56:19.851853 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:56:19.876261 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 6 23:56:19.878682 auditctl[1664]: No rules
Jul 6 23:56:19.880501 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:56:19.880937 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 6 23:56:19.889593 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:56:19.930063 augenrules[1682]: No rules
Jul 6 23:56:19.931199 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:56:19.932934 sudo[1660]: pam_unix(sudo:session): session closed for user root
Jul 6 23:56:20.100112 sshd[1657]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:20.105776 systemd[1]: sshd@5-95.217.0.60:22-147.75.109.163:49814.service: Deactivated successfully.
Jul 6 23:56:20.107949 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:56:20.109317 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:56:20.111044 systemd-logind[1467]: Removed session 6.
Jul 6 23:56:20.284440 systemd[1]: Started sshd@6-95.217.0.60:22-147.75.109.163:49824.service - OpenSSH per-connection server daemon (147.75.109.163:49824).
Jul 6 23:56:21.305302 sshd[1690]: Accepted publickey for core from 147.75.109.163 port 49824 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 6 23:56:21.307461 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:21.314814 systemd-logind[1467]: New session 7 of user core.
Jul 6 23:56:21.320915 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:56:21.846728 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:56:21.847193 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:56:22.307025 (dockerd)[1710]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:56:22.307903 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:56:22.680473 dockerd[1710]: time="2025-07-06T23:56:22.680304963Z" level=info msg="Starting up"
Jul 6 23:56:22.806225 dockerd[1710]: time="2025-07-06T23:56:22.806135470Z" level=info msg="Loading containers: start."
Jul 6 23:56:22.939678 kernel: Initializing XFRM netlink socket
Jul 6 23:56:22.980277 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Jul 6 23:56:23.038039 systemd-networkd[1391]: docker0: Link UP
Jul 6 23:56:23.059068 dockerd[1710]: time="2025-07-06T23:56:23.059006252Z" level=info msg="Loading containers: done."
Jul 6 23:56:23.074111 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3743420542-merged.mount: Deactivated successfully.
Jul 6 23:56:23.081267 dockerd[1710]: time="2025-07-06T23:56:23.081207415Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:56:23.081437 dockerd[1710]: time="2025-07-06T23:56:23.081378786Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 6 23:56:23.081586 dockerd[1710]: time="2025-07-06T23:56:23.081556279Z" level=info msg="Daemon has completed initialization"
Jul 6 23:56:23.128325 dockerd[1710]: time="2025-07-06T23:56:23.128173377Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:56:23.128320 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:56:23.143121 systemd-timesyncd[1369]: Contacted time server 78.46.87.46:123 (2.flatcar.pool.ntp.org).
Jul 6 23:56:23.143226 systemd-timesyncd[1369]: Initial clock synchronization to Sun 2025-07-06 23:56:23.331843 UTC.
Jul 6 23:56:24.461653 containerd[1489]: time="2025-07-06T23:56:24.461505360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 6 23:56:25.129446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031833820.mount: Deactivated successfully.
Jul 6 23:56:26.270340 containerd[1489]: time="2025-07-06T23:56:26.270282570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:26.271427 containerd[1489]: time="2025-07-06T23:56:26.271381453Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077838"
Jul 6 23:56:26.272721 containerd[1489]: time="2025-07-06T23:56:26.272684106Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:26.275329 containerd[1489]: time="2025-07-06T23:56:26.275291419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:26.276410 containerd[1489]: time="2025-07-06T23:56:26.276223417Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.81466015s"
Jul 6 23:56:26.276410 containerd[1489]: time="2025-07-06T23:56:26.276277815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 6 23:56:26.276914 containerd[1489]: time="2025-07-06T23:56:26.276886009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 6 23:56:27.654146 containerd[1489]: time="2025-07-06T23:56:27.654062269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:27.655492 containerd[1489]: time="2025-07-06T23:56:27.655440738Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713316"
Jul 6 23:56:27.657287 containerd[1489]: time="2025-07-06T23:56:27.657244188Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:27.660960 containerd[1489]: time="2025-07-06T23:56:27.660924388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:27.662510 containerd[1489]: time="2025-07-06T23:56:27.662253568Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.385339005s"
Jul 6 23:56:27.662510 containerd[1489]: time="2025-07-06T23:56:27.662309991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 6 23:56:27.663188 containerd[1489]: time="2025-07-06T23:56:27.663144434Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 6 23:56:28.187826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 6 23:56:28.195045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:28.347482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:28.358054 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:56:28.397769 kubelet[1915]: E0706 23:56:28.397732 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:56:28.402567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:56:28.402905 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:56:28.752612 containerd[1489]: time="2025-07-06T23:56:28.752554148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:28.754027 containerd[1489]: time="2025-07-06T23:56:28.753994137Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783693"
Jul 6 23:56:28.755309 containerd[1489]: time="2025-07-06T23:56:28.755275535Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:28.758371 containerd[1489]: time="2025-07-06T23:56:28.758111296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:28.759211 containerd[1489]: time="2025-07-06T23:56:28.759173786Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.095987048s"
Jul 6 23:56:28.759251 containerd[1489]: time="2025-07-06T23:56:28.759220624Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 6 23:56:28.759952 containerd[1489]: time="2025-07-06T23:56:28.759928591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 6 23:56:29.819348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875021309.mount: Deactivated successfully.
Jul 6 23:56:30.110965 containerd[1489]: time="2025-07-06T23:56:30.110832100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:30.112017 containerd[1489]: time="2025-07-06T23:56:30.111855555Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383971"
Jul 6 23:56:30.113664 containerd[1489]: time="2025-07-06T23:56:30.112827676Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:30.114715 containerd[1489]: time="2025-07-06T23:56:30.114676146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:30.115351 containerd[1489]: time="2025-07-06T23:56:30.115216127Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.355260741s"
Jul 6 23:56:30.115351 containerd[1489]: time="2025-07-06T23:56:30.115243598Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 6 23:56:30.115757 containerd[1489]: time="2025-07-06T23:56:30.115731617Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:56:30.683573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205339360.mount: Deactivated successfully.
Jul 6 23:56:31.508649 containerd[1489]: time="2025-07-06T23:56:31.508585043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:31.509933 containerd[1489]: time="2025-07-06T23:56:31.509900185Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Jul 6 23:56:31.511259 containerd[1489]: time="2025-07-06T23:56:31.511236006Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:31.514082 containerd[1489]: time="2025-07-06T23:56:31.514062712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:31.515020 containerd[1489]: time="2025-07-06T23:56:31.515000027Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.39924391s"
Jul 6 23:56:31.515108 containerd[1489]: time="2025-07-06T23:56:31.515091923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 6 23:56:31.515670 containerd[1489]: time="2025-07-06T23:56:31.515647001Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:56:32.019623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679905077.mount: Deactivated successfully.
Jul 6 23:56:32.029958 containerd[1489]: time="2025-07-06T23:56:32.029847955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:32.031339 containerd[1489]: time="2025-07-06T23:56:32.031282757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Jul 6 23:56:32.032482 containerd[1489]: time="2025-07-06T23:56:32.032454227Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:32.037812 containerd[1489]: time="2025-07-06T23:56:32.036446354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:32.037812 containerd[1489]: time="2025-07-06T23:56:32.037306203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.631456ms"
Jul 6 23:56:32.037812 containerd[1489]: time="2025-07-06T23:56:32.037336706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 6 23:56:32.038274 containerd[1489]: time="2025-07-06T23:56:32.038253600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 6 23:56:32.590706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508356813.mount: Deactivated successfully.
Jul 6 23:56:34.158708 containerd[1489]: time="2025-07-06T23:56:34.158586692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:34.160372 containerd[1489]: time="2025-07-06T23:56:34.160194246Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083"
Jul 6 23:56:34.163660 containerd[1489]: time="2025-07-06T23:56:34.161685382Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:34.166422 containerd[1489]: time="2025-07-06T23:56:34.166375465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:34.168343 containerd[1489]: time="2025-07-06T23:56:34.168290845Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.12989045s"
Jul 6 23:56:34.168451 containerd[1489]: time="2025-07-06T23:56:34.168432272Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 6 23:56:37.817084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:37.825858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:37.867967 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-7.scope)...
Jul 6 23:56:37.867992 systemd[1]: Reloading...
Jul 6 23:56:38.000656 zram_generator::config[2121]: No configuration found.
Jul 6 23:56:38.092522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:56:38.173619 systemd[1]: Reloading finished in 305 ms.
Jul 6 23:56:38.213797 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:56:38.213862 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:56:38.214055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:38.215764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:38.331780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:38.337155 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:56:38.375642 kubelet[2164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:56:38.375642 kubelet[2164]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:56:38.375642 kubelet[2164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:56:38.376054 kubelet[2164]: I0706 23:56:38.375583 2164 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:56:38.756477 kubelet[2164]: I0706 23:56:38.756395 2164 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:56:38.756477 kubelet[2164]: I0706 23:56:38.756446 2164 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:56:38.756913 kubelet[2164]: I0706 23:56:38.756870 2164 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:56:38.797959 kubelet[2164]: I0706 23:56:38.797840 2164 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:56:38.802160 kubelet[2164]: E0706 23:56:38.802116 2164 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://95.217.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:56:38.824840 kubelet[2164]: E0706 23:56:38.824718 2164 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:56:38.824840 kubelet[2164]: I0706 23:56:38.824763 2164 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:56:38.830103 kubelet[2164]: I0706 23:56:38.830082 2164 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:56:38.833016 kubelet[2164]: I0706 23:56:38.832872 2164 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:56:38.833214 kubelet[2164]: I0706 23:56:38.833028 2164 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:56:38.833312 kubelet[2164]: I0706 23:56:38.833060 2164 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-6-7e2061accb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:56:38.833312 kubelet[2164]: I0706 23:56:38.833296 2164 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:56:38.833312 kubelet[2164]: I0706 23:56:38.833309 2164 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:56:38.833555 kubelet[2164]: I0706 23:56:38.833422 2164 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:56:38.838619 kubelet[2164]: I0706 23:56:38.837429 2164 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:56:38.838619 kubelet[2164]: I0706 23:56:38.837474 2164 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:56:38.838619 kubelet[2164]: I0706 23:56:38.837529 2164 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:56:38.838619 kubelet[2164]: I0706 23:56:38.837562 2164 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:56:38.846685 kubelet[2164]: W0706 23:56:38.846217 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.217.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-6-7e2061accb&limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused
Jul 6 23:56:38.847022 kubelet[2164]: E0706 23:56:38.846989 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://95.217.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-6-7e2061accb&limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:56:38.847249 kubelet[2164]: I0706 23:56:38.847230 2164 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 6 23:56:38.849102 kubelet[2164]: W0706 23:56:38.848824 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.217.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused
Jul 6 23:56:38.849102 kubelet[2164]: E0706 23:56:38.848899 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://95.217.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:56:38.852777 kubelet[2164]: I0706 23:56:38.852727 2164 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:56:38.853907 kubelet[2164]: W0706 23:56:38.853861 2164 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:56:38.854840 kubelet[2164]: I0706 23:56:38.854787 2164 server.go:1274] "Started kubelet"
Jul 6 23:56:38.856715 kubelet[2164]: I0706 23:56:38.856004 2164 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:56:38.857589 kubelet[2164]: I0706 23:56:38.857568 2164 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:56:38.861724 kubelet[2164]: I0706 23:56:38.860798 2164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:56:38.861724 kubelet[2164]: I0706 23:56:38.861232 2164 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:56:38.863836 kubelet[2164]: E0706 23:56:38.861554 2164 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://95.217.0.60:6443/api/v1/namespaces/default/events\": dial tcp 95.217.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-6-7e2061accb.184fced60fdae4b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-6-7e2061accb,UID:ci-4081-3-4-6-7e2061accb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-6-7e2061accb,},FirstTimestamp:2025-07-06 23:56:38.854739121 +0000 UTC m=+0.514502351,LastTimestamp:2025-07-06 23:56:38.854739121 +0000 UTC m=+0.514502351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-6-7e2061accb,}"
Jul 6 23:56:38.864373 kubelet[2164]: I0706 23:56:38.864117 2164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:56:38.874671 kubelet[2164]: I0706 23:56:38.868952 2164 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:56:38.875165 kubelet[2164]: I0706 23:56:38.875115 2164 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:56:38.878198 kubelet[2164]: E0706 23:56:38.875524 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found"
Jul 6 23:56:38.878573 kubelet[2164]: I0706 23:56:38.878536 2164 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:56:38.878671 kubelet[2164]: I0706 23:56:38.878648 2164 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:56:38.883539 kubelet[2164]: E0706 23:56:38.883499 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-6-7e2061accb?timeout=10s\": dial tcp 95.217.0.60:6443: connect: connection refused" interval="200ms"
Jul 6 23:56:38.885085 kubelet[2164]: I0706 23:56:38.885057 2164 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:56:38.885291 kubelet[2164]: I0706 23:56:38.885272 2164 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:56:38.890045 kubelet[2164]: W0706 23:56:38.889982 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.217.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused
Jul 6 23:56:38.892077 kubelet[2164]: E0706 23:56:38.892048 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://95.217.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:56:38.895477 kubelet[2164]: I0706 23:56:38.894584 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:56:38.896405 kubelet[2164]: I0706 23:56:38.895608 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:56:38.896405 kubelet[2164]: I0706 23:56:38.895678 2164 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 6 23:56:38.896405 kubelet[2164]: I0706 23:56:38.895703 2164 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 6 23:56:38.896405 kubelet[2164]: E0706 23:56:38.895752 2164 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:56:38.901101 kubelet[2164]: I0706 23:56:38.901077 2164 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:56:38.901266 kubelet[2164]: W0706 23:56:38.901210 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.217.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused
Jul 6 23:56:38.901310 kubelet[2164]: E0706 23:56:38.901273 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://95.217.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:56:38.901567 kubelet[2164]: E0706 23:56:38.901539 2164 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:56:38.929649 kubelet[2164]: I0706 23:56:38.929579 2164 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:56:38.929798 kubelet[2164]: I0706 23:56:38.929683 2164 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:56:38.929798 kubelet[2164]: I0706 23:56:38.929706 2164 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:56:38.932999 kubelet[2164]: I0706 23:56:38.932954 2164 policy_none.go:49] "None policy: Start"
Jul 6 23:56:38.933894 kubelet[2164]: I0706 23:56:38.933736 2164 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 6 23:56:38.933894 kubelet[2164]: I0706 23:56:38.933761 2164 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:56:38.941164 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:56:38.959300 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:56:38.970541 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:56:38.973450 kubelet[2164]: I0706 23:56:38.973171 2164 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:56:38.973450 kubelet[2164]: I0706 23:56:38.973398 2164 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:38.973450 kubelet[2164]: I0706 23:56:38.973409 2164 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:38.975760 kubelet[2164]: I0706 23:56:38.974140 2164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:38.977069 kubelet[2164]: E0706 23:56:38.976980 2164 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:39.011491 systemd[1]: Created slice kubepods-burstable-poda8ba97ea8a79604b926b2c1da1ab06f5.slice - libcontainer container kubepods-burstable-poda8ba97ea8a79604b926b2c1da1ab06f5.slice. Jul 6 23:56:39.033428 systemd[1]: Created slice kubepods-burstable-pod582984c00b7053039dfe19faa07eaacb.slice - libcontainer container kubepods-burstable-pod582984c00b7053039dfe19faa07eaacb.slice. Jul 6 23:56:39.039442 systemd[1]: Created slice kubepods-burstable-podd5ae581745f7243314ad06cfa636a2d9.slice - libcontainer container kubepods-burstable-podd5ae581745f7243314ad06cfa636a2d9.slice. 
Jul 6 23:56:39.077787 kubelet[2164]: I0706 23:56:39.077695 2164 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.078485 kubelet[2164]: E0706 23:56:39.078422 2164 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://95.217.0.60:6443/api/v1/nodes\": dial tcp 95.217.0.60:6443: connect: connection refused" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.085540 kubelet[2164]: E0706 23:56:39.085457 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-6-7e2061accb?timeout=10s\": dial tcp 95.217.0.60:6443: connect: connection refused" interval="400ms" Jul 6 23:56:39.180292 kubelet[2164]: I0706 23:56:39.180056 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180292 kubelet[2164]: I0706 23:56:39.180108 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8ba97ea8a79604b926b2c1da1ab06f5-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-6-7e2061accb\" (UID: \"a8ba97ea8a79604b926b2c1da1ab06f5\") " pod="kube-system/kube-apiserver-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180292 kubelet[2164]: I0706 23:56:39.180123 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8ba97ea8a79604b926b2c1da1ab06f5-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-6-7e2061accb\" (UID: \"a8ba97ea8a79604b926b2c1da1ab06f5\") " 
pod="kube-system/kube-apiserver-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180292 kubelet[2164]: I0706 23:56:39.180170 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8ba97ea8a79604b926b2c1da1ab06f5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-6-7e2061accb\" (UID: \"a8ba97ea8a79604b926b2c1da1ab06f5\") " pod="kube-system/kube-apiserver-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180292 kubelet[2164]: I0706 23:56:39.180185 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180904 kubelet[2164]: I0706 23:56:39.180244 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180904 kubelet[2164]: I0706 23:56:39.180262 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180904 kubelet[2164]: I0706 23:56:39.180276 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.180904 kubelet[2164]: I0706 23:56:39.180290 2164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/582984c00b7053039dfe19faa07eaacb-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-6-7e2061accb\" (UID: \"582984c00b7053039dfe19faa07eaacb\") " pod="kube-system/kube-scheduler-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.283002 kubelet[2164]: I0706 23:56:39.282784 2164 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.283513 kubelet[2164]: E0706 23:56:39.283397 2164 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://95.217.0.60:6443/api/v1/nodes\": dial tcp 95.217.0.60:6443: connect: connection refused" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.331409 containerd[1489]: time="2025-07-06T23:56:39.331323980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-6-7e2061accb,Uid:a8ba97ea8a79604b926b2c1da1ab06f5,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:39.338360 containerd[1489]: time="2025-07-06T23:56:39.338250846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-6-7e2061accb,Uid:582984c00b7053039dfe19faa07eaacb,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:39.343110 containerd[1489]: time="2025-07-06T23:56:39.342862491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-6-7e2061accb,Uid:d5ae581745f7243314ad06cfa636a2d9,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:39.486330 kubelet[2164]: E0706 23:56:39.486231 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://95.217.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-6-7e2061accb?timeout=10s\": dial tcp 95.217.0.60:6443: connect: connection refused" interval="800ms" Jul 6 23:56:39.687594 kubelet[2164]: I0706 23:56:39.687268 2164 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.687857 kubelet[2164]: E0706 23:56:39.687809 2164 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://95.217.0.60:6443/api/v1/nodes\": dial tcp 95.217.0.60:6443: connect: connection refused" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:39.740481 kubelet[2164]: W0706 23:56:39.740406 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.217.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused Jul 6 23:56:39.740481 kubelet[2164]: E0706 23:56:39.740478 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://95.217.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:39.845947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518386250.mount: Deactivated successfully. 
Jul 6 23:56:39.866562 containerd[1489]: time="2025-07-06T23:56:39.866430129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:39.867966 containerd[1489]: time="2025-07-06T23:56:39.867909941Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:39.869424 containerd[1489]: time="2025-07-06T23:56:39.869354294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:56:39.870539 containerd[1489]: time="2025-07-06T23:56:39.870482762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jul 6 23:56:39.873661 containerd[1489]: time="2025-07-06T23:56:39.871687998Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:39.874420 containerd[1489]: time="2025-07-06T23:56:39.874375236Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:39.874686 containerd[1489]: time="2025-07-06T23:56:39.874581206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:56:39.880018 containerd[1489]: time="2025-07-06T23:56:39.879973885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:39.883666 
containerd[1489]: time="2025-07-06T23:56:39.883571973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.131003ms" Jul 6 23:56:39.886699 containerd[1489]: time="2025-07-06T23:56:39.886620515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.682574ms" Jul 6 23:56:39.889849 containerd[1489]: time="2025-07-06T23:56:39.889799106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.443852ms" Jul 6 23:56:40.066706 containerd[1489]: time="2025-07-06T23:56:40.063564480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:40.066706 containerd[1489]: time="2025-07-06T23:56:40.063643186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:40.066706 containerd[1489]: time="2025-07-06T23:56:40.063653504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.066706 containerd[1489]: time="2025-07-06T23:56:40.063728543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.074151 containerd[1489]: time="2025-07-06T23:56:40.073876055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:40.074151 containerd[1489]: time="2025-07-06T23:56:40.073961703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:40.074151 containerd[1489]: time="2025-07-06T23:56:40.074005467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.075163 containerd[1489]: time="2025-07-06T23:56:40.075000176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.078074 containerd[1489]: time="2025-07-06T23:56:40.077676882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:40.078074 containerd[1489]: time="2025-07-06T23:56:40.077766348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:40.078074 containerd[1489]: time="2025-07-06T23:56:40.077787497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.078074 containerd[1489]: time="2025-07-06T23:56:40.077955157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.094720 kubelet[2164]: W0706 23:56:40.094617 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.217.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-6-7e2061accb&limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused Jul 6 23:56:40.094882 kubelet[2164]: E0706 23:56:40.094740 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://95.217.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-6-7e2061accb&limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:40.097878 systemd[1]: Started cri-containerd-867d863b88feedbcf931a3057f17a60cd51a013efd9c97fc0465d10a86470948.scope - libcontainer container 867d863b88feedbcf931a3057f17a60cd51a013efd9c97fc0465d10a86470948. Jul 6 23:56:40.103400 systemd[1]: Started cri-containerd-9b62e3a295b421030de9e4510b3647d4047c8b0cd6906b378ae52a839fdaa250.scope - libcontainer container 9b62e3a295b421030de9e4510b3647d4047c8b0cd6906b378ae52a839fdaa250. Jul 6 23:56:40.119785 systemd[1]: Started cri-containerd-ce920c909d8b25214d913f9561f00f7acec73b88ab868d9c5f59f2765fb71dcb.scope - libcontainer container ce920c909d8b25214d913f9561f00f7acec73b88ab868d9c5f59f2765fb71dcb. 
Jul 6 23:56:40.172540 containerd[1489]: time="2025-07-06T23:56:40.172494331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-6-7e2061accb,Uid:582984c00b7053039dfe19faa07eaacb,Namespace:kube-system,Attempt:0,} returns sandbox id \"867d863b88feedbcf931a3057f17a60cd51a013efd9c97fc0465d10a86470948\"" Jul 6 23:56:40.179771 containerd[1489]: time="2025-07-06T23:56:40.179740511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-6-7e2061accb,Uid:a8ba97ea8a79604b926b2c1da1ab06f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce920c909d8b25214d913f9561f00f7acec73b88ab868d9c5f59f2765fb71dcb\"" Jul 6 23:56:40.182132 containerd[1489]: time="2025-07-06T23:56:40.182092436Z" level=info msg="CreateContainer within sandbox \"867d863b88feedbcf931a3057f17a60cd51a013efd9c97fc0465d10a86470948\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:56:40.183044 containerd[1489]: time="2025-07-06T23:56:40.182946239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-6-7e2061accb,Uid:d5ae581745f7243314ad06cfa636a2d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b62e3a295b421030de9e4510b3647d4047c8b0cd6906b378ae52a839fdaa250\"" Jul 6 23:56:40.195753 containerd[1489]: time="2025-07-06T23:56:40.195702612Z" level=info msg="CreateContainer within sandbox \"ce920c909d8b25214d913f9561f00f7acec73b88ab868d9c5f59f2765fb71dcb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:56:40.197249 containerd[1489]: time="2025-07-06T23:56:40.197186165Z" level=info msg="CreateContainer within sandbox \"9b62e3a295b421030de9e4510b3647d4047c8b0cd6906b378ae52a839fdaa250\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:56:40.199841 containerd[1489]: time="2025-07-06T23:56:40.199803396Z" level=info msg="CreateContainer within sandbox 
\"867d863b88feedbcf931a3057f17a60cd51a013efd9c97fc0465d10a86470948\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3\"" Jul 6 23:56:40.200455 containerd[1489]: time="2025-07-06T23:56:40.200426404Z" level=info msg="StartContainer for \"34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3\"" Jul 6 23:56:40.215567 containerd[1489]: time="2025-07-06T23:56:40.215427663Z" level=info msg="CreateContainer within sandbox \"9b62e3a295b421030de9e4510b3647d4047c8b0cd6906b378ae52a839fdaa250\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1\"" Jul 6 23:56:40.216647 containerd[1489]: time="2025-07-06T23:56:40.216023124Z" level=info msg="StartContainer for \"d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1\"" Jul 6 23:56:40.219153 kubelet[2164]: W0706 23:56:40.219093 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.217.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused Jul 6 23:56:40.219218 kubelet[2164]: E0706 23:56:40.219174 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://95.217.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:40.222862 containerd[1489]: time="2025-07-06T23:56:40.222701342Z" level=info msg="CreateContainer within sandbox \"ce920c909d8b25214d913f9561f00f7acec73b88ab868d9c5f59f2765fb71dcb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4ea267208721c6a320905bc55a878b66ef0e46f4b5ffacfd5d73f4821f4625bf\"" Jul 6 23:56:40.223564 containerd[1489]: 
time="2025-07-06T23:56:40.223536428Z" level=info msg="StartContainer for \"4ea267208721c6a320905bc55a878b66ef0e46f4b5ffacfd5d73f4821f4625bf\"" Jul 6 23:56:40.230762 systemd[1]: Started cri-containerd-34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3.scope - libcontainer container 34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3. Jul 6 23:56:40.247743 systemd[1]: Started cri-containerd-d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1.scope - libcontainer container d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1. Jul 6 23:56:40.269723 systemd[1]: Started cri-containerd-4ea267208721c6a320905bc55a878b66ef0e46f4b5ffacfd5d73f4821f4625bf.scope - libcontainer container 4ea267208721c6a320905bc55a878b66ef0e46f4b5ffacfd5d73f4821f4625bf. Jul 6 23:56:40.289656 kubelet[2164]: W0706 23:56:40.287069 2164 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.217.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.0.60:6443: connect: connection refused Jul 6 23:56:40.289656 kubelet[2164]: E0706 23:56:40.287141 2164 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://95.217.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 95.217.0.60:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:40.289656 kubelet[2164]: E0706 23:56:40.287307 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-6-7e2061accb?timeout=10s\": dial tcp 95.217.0.60:6443: connect: connection refused" interval="1.6s" Jul 6 23:56:40.297394 containerd[1489]: time="2025-07-06T23:56:40.297341977Z" level=info msg="StartContainer for 
\"34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3\" returns successfully" Jul 6 23:56:40.328163 containerd[1489]: time="2025-07-06T23:56:40.328066906Z" level=info msg="StartContainer for \"4ea267208721c6a320905bc55a878b66ef0e46f4b5ffacfd5d73f4821f4625bf\" returns successfully" Jul 6 23:56:40.347940 containerd[1489]: time="2025-07-06T23:56:40.347906763Z" level=info msg="StartContainer for \"d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1\" returns successfully" Jul 6 23:56:40.490214 kubelet[2164]: I0706 23:56:40.489803 2164 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:40.490214 kubelet[2164]: E0706 23:56:40.490138 2164 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://95.217.0.60:6443/api/v1/nodes\": dial tcp 95.217.0.60:6443: connect: connection refused" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:42.035644 kubelet[2164]: E0706 23:56:42.035272 2164 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-6-7e2061accb\" not found" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:42.093937 kubelet[2164]: I0706 23:56:42.093674 2164 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:42.111436 kubelet[2164]: I0706 23:56:42.111389 2164 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:42.111436 kubelet[2164]: E0706 23:56:42.111432 2164 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-6-7e2061accb\": node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.122312 kubelet[2164]: E0706 23:56:42.122274 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.223314 kubelet[2164]: E0706 23:56:42.223244 2164 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.323991 kubelet[2164]: E0706 23:56:42.323764 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.424096 kubelet[2164]: E0706 23:56:42.424013 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.525023 kubelet[2164]: E0706 23:56:42.524954 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.626115 kubelet[2164]: E0706 23:56:42.625897 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.726958 kubelet[2164]: E0706 23:56:42.726848 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.827021 kubelet[2164]: E0706 23:56:42.826947 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:42.927814 kubelet[2164]: E0706 23:56:42.927608 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:43.028189 kubelet[2164]: E0706 23:56:43.028131 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:43.129262 kubelet[2164]: E0706 23:56:43.129192 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:43.229932 kubelet[2164]: E0706 23:56:43.229853 2164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:43.852685 kubelet[2164]: I0706 
23:56:43.852653 2164 apiserver.go:52] "Watching apiserver" Jul 6 23:56:43.879559 kubelet[2164]: I0706 23:56:43.879473 2164 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:56:44.310197 systemd[1]: Reloading requested from client PID 2437 ('systemctl') (unit session-7.scope)... Jul 6 23:56:44.310224 systemd[1]: Reloading... Jul 6 23:56:44.418664 zram_generator::config[2473]: No configuration found. Jul 6 23:56:44.532959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:44.629378 systemd[1]: Reloading finished in 318 ms. Jul 6 23:56:44.664971 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:44.665756 kubelet[2164]: I0706 23:56:44.665484 2164 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:44.690900 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:56:44.691110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:44.697382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:44.857753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:44.870140 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:56:44.941312 kubelet[2528]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:44.941312 kubelet[2528]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 6 23:56:44.941312 kubelet[2528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:44.941731 kubelet[2528]: I0706 23:56:44.941367 2528 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:56:44.949665 kubelet[2528]: I0706 23:56:44.949026 2528 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:56:44.949665 kubelet[2528]: I0706 23:56:44.949053 2528 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:56:44.949665 kubelet[2528]: I0706 23:56:44.949426 2528 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:56:44.952021 kubelet[2528]: I0706 23:56:44.951988 2528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:56:44.966982 kubelet[2528]: I0706 23:56:44.966927 2528 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:44.971228 kubelet[2528]: E0706 23:56:44.971191 2528 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:56:44.971427 kubelet[2528]: I0706 23:56:44.971393 2528 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:56:44.975184 kubelet[2528]: I0706 23:56:44.975164 2528 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:56:44.975378 kubelet[2528]: I0706 23:56:44.975367 2528 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:56:44.975662 kubelet[2528]: I0706 23:56:44.975568 2528 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:56:44.976228 kubelet[2528]: I0706 23:56:44.975596 2528 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-6-7e2061accb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:56:44.976228 kubelet[2528]: I0706 23:56:44.975966 2528 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:56:44.976228 kubelet[2528]: I0706 23:56:44.975977 2528 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:56:44.976228 kubelet[2528]: I0706 23:56:44.976006 2528 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:44.976228 kubelet[2528]: I0706 23:56:44.976127 2528 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:56:44.976457 kubelet[2528]: I0706 23:56:44.976141 2528 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:56:44.976457 kubelet[2528]: I0706 23:56:44.976168 2528 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:56:44.976457 kubelet[2528]: I0706 23:56:44.976179 2528 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:56:44.980228 kubelet[2528]: I0706 23:56:44.980191 2528 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:56:44.981645 kubelet[2528]: I0706 23:56:44.980722 2528 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:56:44.981645 kubelet[2528]: I0706 23:56:44.981243 2528 server.go:1274] "Started kubelet" Jul 6 23:56:44.988856 kubelet[2528]: I0706 23:56:44.988741 2528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:56:44.996796 kubelet[2528]: I0706 23:56:44.994688 2528 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:56:44.996796 kubelet[2528]: E0706 23:56:44.995029 2528 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-6-7e2061accb\" not found" Jul 6 23:56:44.996796 kubelet[2528]: I0706 23:56:44.995488 2528 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:56:44.996796 kubelet[2528]: I0706 
23:56:44.995706 2528 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:56:44.998174 kubelet[2528]: I0706 23:56:44.998143 2528 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:56:44.999956 kubelet[2528]: I0706 23:56:44.999942 2528 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:56:45.012019 kubelet[2528]: I0706 23:56:45.011981 2528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:56:45.013807 kubelet[2528]: I0706 23:56:45.013794 2528 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:56:45.014047 kubelet[2528]: I0706 23:56:45.014036 2528 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:56:45.019501 kubelet[2528]: I0706 23:56:45.014221 2528 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:56:45.019697 kubelet[2528]: I0706 23:56:45.019674 2528 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:56:45.026733 kubelet[2528]: I0706 23:56:45.026699 2528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:56:45.029725 kubelet[2528]: I0706 23:56:45.029709 2528 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:56:45.029872 kubelet[2528]: I0706 23:56:45.029865 2528 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:56:45.029921 kubelet[2528]: I0706 23:56:45.029915 2528 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:56:45.029988 kubelet[2528]: E0706 23:56:45.029975 2528 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:56:45.035675 kubelet[2528]: E0706 23:56:45.035051 2528 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:56:45.036175 kubelet[2528]: I0706 23:56:45.036161 2528 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:56:45.087491 kubelet[2528]: I0706 23:56:45.087460 2528 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:56:45.087491 kubelet[2528]: I0706 23:56:45.087480 2528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:56:45.087668 kubelet[2528]: I0706 23:56:45.087520 2528 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:45.087752 kubelet[2528]: I0706 23:56:45.087733 2528 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:56:45.087777 kubelet[2528]: I0706 23:56:45.087749 2528 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:56:45.087777 kubelet[2528]: I0706 23:56:45.087773 2528 policy_none.go:49] "None policy: Start" Jul 6 23:56:45.089305 kubelet[2528]: I0706 23:56:45.089290 2528 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:56:45.089386 kubelet[2528]: I0706 23:56:45.089379 2528 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:56:45.089583 kubelet[2528]: I0706 23:56:45.089575 2528 state_mem.go:75] "Updated machine memory state" Jul 6 23:56:45.093556 kubelet[2528]: I0706 23:56:45.093541 2528 manager.go:513] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:56:45.095354 kubelet[2528]: I0706 23:56:45.095344 2528 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:45.095830 kubelet[2528]: I0706 23:56:45.095800 2528 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:45.096222 kubelet[2528]: I0706 23:56:45.096201 2528 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:45.205667 kubelet[2528]: I0706 23:56:45.203736 2528 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.213318 kubelet[2528]: I0706 23:56:45.213280 2528 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.213562 kubelet[2528]: I0706 23:56:45.213545 2528 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298198 kubelet[2528]: I0706 23:56:45.298144 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298198 kubelet[2528]: I0706 23:56:45.298196 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298372 kubelet[2528]: I0706 23:56:45.298219 2528 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298372 kubelet[2528]: I0706 23:56:45.298244 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8ba97ea8a79604b926b2c1da1ab06f5-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-6-7e2061accb\" (UID: \"a8ba97ea8a79604b926b2c1da1ab06f5\") " pod="kube-system/kube-apiserver-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298372 kubelet[2528]: I0706 23:56:45.298265 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8ba97ea8a79604b926b2c1da1ab06f5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-6-7e2061accb\" (UID: \"a8ba97ea8a79604b926b2c1da1ab06f5\") " pod="kube-system/kube-apiserver-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298372 kubelet[2528]: I0706 23:56:45.298284 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: \"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298372 kubelet[2528]: I0706 23:56:45.298301 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5ae581745f7243314ad06cfa636a2d9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-6-7e2061accb\" (UID: 
\"d5ae581745f7243314ad06cfa636a2d9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298677 kubelet[2528]: I0706 23:56:45.298320 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/582984c00b7053039dfe19faa07eaacb-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-6-7e2061accb\" (UID: \"582984c00b7053039dfe19faa07eaacb\") " pod="kube-system/kube-scheduler-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.298677 kubelet[2528]: I0706 23:56:45.298339 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8ba97ea8a79604b926b2c1da1ab06f5-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-6-7e2061accb\" (UID: \"a8ba97ea8a79604b926b2c1da1ab06f5\") " pod="kube-system/kube-apiserver-ci-4081-3-4-6-7e2061accb" Jul 6 23:56:45.318279 sudo[2561]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:56:45.318769 sudo[2561]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:56:45.877193 sudo[2561]: pam_unix(sudo:session): session closed for user root Jul 6 23:56:45.979649 kubelet[2528]: I0706 23:56:45.977399 2528 apiserver.go:52] "Watching apiserver" Jul 6 23:56:45.996066 kubelet[2528]: I0706 23:56:45.996008 2528 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:56:46.105493 kubelet[2528]: I0706 23:56:46.105427 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-6-7e2061accb" podStartSLOduration=1.105406475 podStartE2EDuration="1.105406475s" podCreationTimestamp="2025-07-06 23:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:46.098233237 +0000 UTC 
m=+1.221156576" watchObservedRunningTime="2025-07-06 23:56:46.105406475 +0000 UTC m=+1.228329813" Jul 6 23:56:46.113806 kubelet[2528]: I0706 23:56:46.113750 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-6-7e2061accb" podStartSLOduration=1.113731651 podStartE2EDuration="1.113731651s" podCreationTimestamp="2025-07-06 23:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:46.105597581 +0000 UTC m=+1.228520910" watchObservedRunningTime="2025-07-06 23:56:46.113731651 +0000 UTC m=+1.236654990" Jul 6 23:56:46.122194 kubelet[2528]: I0706 23:56:46.122143 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-6-7e2061accb" podStartSLOduration=1.122125472 podStartE2EDuration="1.122125472s" podCreationTimestamp="2025-07-06 23:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:46.114189079 +0000 UTC m=+1.237112419" watchObservedRunningTime="2025-07-06 23:56:46.122125472 +0000 UTC m=+1.245048811" Jul 6 23:56:47.643755 sudo[1693]: pam_unix(sudo:session): session closed for user root Jul 6 23:56:47.809962 sshd[1690]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:47.813300 systemd[1]: sshd@6-95.217.0.60:22-147.75.109.163:49824.service: Deactivated successfully. Jul 6 23:56:47.815359 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:56:47.815512 systemd[1]: session-7.scope: Consumed 5.819s CPU time, 141.7M memory peak, 0B memory swap peak. Jul 6 23:56:47.816710 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:56:47.818297 systemd-logind[1467]: Removed session 7. 
Jul 6 23:56:49.836920 kubelet[2528]: I0706 23:56:49.836785 2528 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:56:49.838650 containerd[1489]: time="2025-07-06T23:56:49.837530039Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:56:49.839219 kubelet[2528]: I0706 23:56:49.837748 2528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:56:50.227452 update_engine[1472]: I20250706 23:56:50.226444 1472 update_attempter.cc:509] Updating boot flags... Jul 6 23:56:50.309787 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2608) Jul 6 23:56:50.375748 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2612) Jul 6 23:56:50.430710 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2612) Jul 6 23:56:50.719750 systemd[1]: Created slice kubepods-besteffort-poddfaf75ca_2675_4083_b91d_ccaf0c6d84a7.slice - libcontainer container kubepods-besteffort-poddfaf75ca_2675_4083_b91d_ccaf0c6d84a7.slice. 
Jul 6 23:56:50.730025 kubelet[2528]: I0706 23:56:50.729868 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfaf75ca-2675-4083-b91d-ccaf0c6d84a7-xtables-lock\") pod \"kube-proxy-sb5pr\" (UID: \"dfaf75ca-2675-4083-b91d-ccaf0c6d84a7\") " pod="kube-system/kube-proxy-sb5pr" Jul 6 23:56:50.730025 kubelet[2528]: I0706 23:56:50.729907 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfaf75ca-2675-4083-b91d-ccaf0c6d84a7-lib-modules\") pod \"kube-proxy-sb5pr\" (UID: \"dfaf75ca-2675-4083-b91d-ccaf0c6d84a7\") " pod="kube-system/kube-proxy-sb5pr" Jul 6 23:56:50.730025 kubelet[2528]: I0706 23:56:50.729931 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfaf75ca-2675-4083-b91d-ccaf0c6d84a7-kube-proxy\") pod \"kube-proxy-sb5pr\" (UID: \"dfaf75ca-2675-4083-b91d-ccaf0c6d84a7\") " pod="kube-system/kube-proxy-sb5pr" Jul 6 23:56:50.730025 kubelet[2528]: I0706 23:56:50.729949 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgct\" (UniqueName: \"kubernetes.io/projected/dfaf75ca-2675-4083-b91d-ccaf0c6d84a7-kube-api-access-smgct\") pod \"kube-proxy-sb5pr\" (UID: \"dfaf75ca-2675-4083-b91d-ccaf0c6d84a7\") " pod="kube-system/kube-proxy-sb5pr" Jul 6 23:56:50.754893 systemd[1]: Created slice kubepods-burstable-pod327c3bdd_1104_4eb4_86e5_8e4700bdd490.slice - libcontainer container kubepods-burstable-pod327c3bdd_1104_4eb4_86e5_8e4700bdd490.slice. 
Jul 6 23:56:50.830219 kubelet[2528]: I0706 23:56:50.830175 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-kernel\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.830402 kubelet[2528]: I0706 23:56:50.830394 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-lib-modules\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.830472 kubelet[2528]: I0706 23:56:50.830464 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-run\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831099 kubelet[2528]: I0706 23:56:50.830574 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hubble-tls\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831099 kubelet[2528]: I0706 23:56:50.830607 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-net\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831099 kubelet[2528]: I0706 23:56:50.830639 2528 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hostproc\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831099 kubelet[2528]: I0706 23:56:50.830652 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cni-path\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831099 kubelet[2528]: I0706 23:56:50.830667 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-etc-cni-netd\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831099 kubelet[2528]: I0706 23:56:50.830680 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-config-path\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831231 kubelet[2528]: I0706 23:56:50.830694 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwvws\" (UniqueName: \"kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-kube-api-access-zwvws\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831231 kubelet[2528]: I0706 23:56:50.830706 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-bpf-maps\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831231 kubelet[2528]: I0706 23:56:50.830718 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-xtables-lock\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831231 kubelet[2528]: I0706 23:56:50.830734 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-cgroup\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:50.831231 kubelet[2528]: I0706 23:56:50.830748 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/327c3bdd-1104-4eb4-86e5-8e4700bdd490-clustermesh-secrets\") pod \"cilium-jv4hk\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") " pod="kube-system/cilium-jv4hk" Jul 6 23:56:51.004160 systemd[1]: Created slice kubepods-besteffort-podfcf2c35e_7dca_4015_90a2_421d983bac78.slice - libcontainer container kubepods-besteffort-podfcf2c35e_7dca_4015_90a2_421d983bac78.slice. 
Jul 6 23:56:51.032879 containerd[1489]: time="2025-07-06T23:56:51.032443723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sb5pr,Uid:dfaf75ca-2675-4083-b91d-ccaf0c6d84a7,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:51.034012 kubelet[2528]: I0706 23:56:51.033449 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcf2c35e-7dca-4015-90a2-421d983bac78-cilium-config-path\") pod \"cilium-operator-5d85765b45-55tbg\" (UID: \"fcf2c35e-7dca-4015-90a2-421d983bac78\") " pod="kube-system/cilium-operator-5d85765b45-55tbg" Jul 6 23:56:51.034012 kubelet[2528]: I0706 23:56:51.033486 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hst4k\" (UniqueName: \"kubernetes.io/projected/fcf2c35e-7dca-4015-90a2-421d983bac78-kube-api-access-hst4k\") pod \"cilium-operator-5d85765b45-55tbg\" (UID: \"fcf2c35e-7dca-4015-90a2-421d983bac78\") " pod="kube-system/cilium-operator-5d85765b45-55tbg" Jul 6 23:56:51.058356 containerd[1489]: time="2025-07-06T23:56:51.057935245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jv4hk,Uid:327c3bdd-1104-4eb4-86e5-8e4700bdd490,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:51.073978 containerd[1489]: time="2025-07-06T23:56:51.073868962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:51.074119 containerd[1489]: time="2025-07-06T23:56:51.073966978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:51.074119 containerd[1489]: time="2025-07-06T23:56:51.073998989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.074168 containerd[1489]: time="2025-07-06T23:56:51.074132184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.091415 containerd[1489]: time="2025-07-06T23:56:51.091289869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:51.091415 containerd[1489]: time="2025-07-06T23:56:51.091364597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:51.091603 containerd[1489]: time="2025-07-06T23:56:51.091387756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.091603 containerd[1489]: time="2025-07-06T23:56:51.091535668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.102266 systemd[1]: Started cri-containerd-8c3ddbc43cce521a961605f1ca95e6c9af3bfe2858ddc5de1a28956a90dc39ac.scope - libcontainer container 8c3ddbc43cce521a961605f1ca95e6c9af3bfe2858ddc5de1a28956a90dc39ac. Jul 6 23:56:51.119803 systemd[1]: Started cri-containerd-1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73.scope - libcontainer container 1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73. 
Jul 6 23:56:51.151889 containerd[1489]: time="2025-07-06T23:56:51.151761936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sb5pr,Uid:dfaf75ca-2675-4083-b91d-ccaf0c6d84a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c3ddbc43cce521a961605f1ca95e6c9af3bfe2858ddc5de1a28956a90dc39ac\"" Jul 6 23:56:51.155533 containerd[1489]: time="2025-07-06T23:56:51.155360440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jv4hk,Uid:327c3bdd-1104-4eb4-86e5-8e4700bdd490,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\"" Jul 6 23:56:51.156428 containerd[1489]: time="2025-07-06T23:56:51.156089323Z" level=info msg="CreateContainer within sandbox \"8c3ddbc43cce521a961605f1ca95e6c9af3bfe2858ddc5de1a28956a90dc39ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:56:51.157827 containerd[1489]: time="2025-07-06T23:56:51.157810182Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:56:51.177325 containerd[1489]: time="2025-07-06T23:56:51.177255433Z" level=info msg="CreateContainer within sandbox \"8c3ddbc43cce521a961605f1ca95e6c9af3bfe2858ddc5de1a28956a90dc39ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a1f6467ffa3974b8eda5e03eb4990574bbf1ddd4af7a7469da4e99c1035a3d9\"" Jul 6 23:56:51.178095 containerd[1489]: time="2025-07-06T23:56:51.178063336Z" level=info msg="StartContainer for \"1a1f6467ffa3974b8eda5e03eb4990574bbf1ddd4af7a7469da4e99c1035a3d9\"" Jul 6 23:56:51.207857 systemd[1]: Started cri-containerd-1a1f6467ffa3974b8eda5e03eb4990574bbf1ddd4af7a7469da4e99c1035a3d9.scope - libcontainer container 1a1f6467ffa3974b8eda5e03eb4990574bbf1ddd4af7a7469da4e99c1035a3d9. 
Jul 6 23:56:51.246886 containerd[1489]: time="2025-07-06T23:56:51.246842492Z" level=info msg="StartContainer for \"1a1f6467ffa3974b8eda5e03eb4990574bbf1ddd4af7a7469da4e99c1035a3d9\" returns successfully" Jul 6 23:56:51.312043 containerd[1489]: time="2025-07-06T23:56:51.311388909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-55tbg,Uid:fcf2c35e-7dca-4015-90a2-421d983bac78,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:51.336420 containerd[1489]: time="2025-07-06T23:56:51.335988336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:51.336420 containerd[1489]: time="2025-07-06T23:56:51.336039745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:51.336420 containerd[1489]: time="2025-07-06T23:56:51.336055897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.336420 containerd[1489]: time="2025-07-06T23:56:51.336149973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.359926 systemd[1]: Started cri-containerd-97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b.scope - libcontainer container 97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b. Jul 6 23:56:51.402988 containerd[1489]: time="2025-07-06T23:56:51.402937378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-55tbg,Uid:fcf2c35e-7dca-4015-90a2-421d983bac78,Namespace:kube-system,Attempt:0,} returns sandbox id \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\"" Jul 6 23:56:54.735446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578582016.mount: Deactivated successfully. 
Jul 6 23:56:56.323034 containerd[1489]: time="2025-07-06T23:56:56.322956213Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.324724 containerd[1489]: time="2025-07-06T23:56:56.324682764Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:56:56.325571 containerd[1489]: time="2025-07-06T23:56:56.325502737Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.327566 containerd[1489]: time="2025-07-06T23:56:56.326837234Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.168928185s" Jul 6 23:56:56.327566 containerd[1489]: time="2025-07-06T23:56:56.326881425Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:56:56.329286 containerd[1489]: time="2025-07-06T23:56:56.328510154Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:56:56.329985 containerd[1489]: time="2025-07-06T23:56:56.329563759Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:56:56.401304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995659907.mount: Deactivated successfully. Jul 6 23:56:56.407252 containerd[1489]: time="2025-07-06T23:56:56.407198869Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\"" Jul 6 23:56:56.409661 containerd[1489]: time="2025-07-06T23:56:56.408191234Z" level=info msg="StartContainer for \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\"" Jul 6 23:56:56.537552 systemd[1]: run-containerd-runc-k8s.io-3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6-runc.5HCJlK.mount: Deactivated successfully. Jul 6 23:56:56.541275 kubelet[2528]: I0706 23:56:56.536130 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sb5pr" podStartSLOduration=6.53175244 podStartE2EDuration="6.53175244s" podCreationTimestamp="2025-07-06 23:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:52.104354042 +0000 UTC m=+7.227277411" watchObservedRunningTime="2025-07-06 23:56:56.53175244 +0000 UTC m=+11.654675779" Jul 6 23:56:56.548609 systemd[1]: Started cri-containerd-3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6.scope - libcontainer container 3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6. Jul 6 23:56:56.589876 containerd[1489]: time="2025-07-06T23:56:56.589767535Z" level=info msg="StartContainer for \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\" returns successfully" Jul 6 23:56:56.598771 systemd[1]: cri-containerd-3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6.scope: Deactivated successfully. 
Jul 6 23:56:56.752061 containerd[1489]: time="2025-07-06T23:56:56.733797894Z" level=info msg="shim disconnected" id=3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6 namespace=k8s.io Jul 6 23:56:56.752061 containerd[1489]: time="2025-07-06T23:56:56.752041255Z" level=warning msg="cleaning up after shim disconnected" id=3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6 namespace=k8s.io Jul 6 23:56:56.752061 containerd[1489]: time="2025-07-06T23:56:56.752067155Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:57.151978 containerd[1489]: time="2025-07-06T23:56:57.151902878Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:56:57.172530 containerd[1489]: time="2025-07-06T23:56:57.172462853Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\"" Jul 6 23:56:57.174687 containerd[1489]: time="2025-07-06T23:56:57.173857636Z" level=info msg="StartContainer for \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\"" Jul 6 23:56:57.209933 systemd[1]: Started cri-containerd-9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd.scope - libcontainer container 9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd. Jul 6 23:56:57.240557 containerd[1489]: time="2025-07-06T23:56:57.240453321Z" level=info msg="StartContainer for \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\" returns successfully" Jul 6 23:56:57.258835 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:56:57.259762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:56:57.259895 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:56:57.268336 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:56:57.268557 systemd[1]: cri-containerd-9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd.scope: Deactivated successfully. Jul 6 23:56:57.293850 containerd[1489]: time="2025-07-06T23:56:57.293781778Z" level=info msg="shim disconnected" id=9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd namespace=k8s.io Jul 6 23:56:57.293850 containerd[1489]: time="2025-07-06T23:56:57.293843858Z" level=warning msg="cleaning up after shim disconnected" id=9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd namespace=k8s.io Jul 6 23:56:57.293850 containerd[1489]: time="2025-07-06T23:56:57.293853148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:57.302667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:56:57.400685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6-rootfs.mount: Deactivated successfully. Jul 6 23:56:57.796364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410411264.mount: Deactivated successfully. 
Jul 6 23:56:58.163043 containerd[1489]: time="2025-07-06T23:56:58.162648850Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:56:58.192477 containerd[1489]: time="2025-07-06T23:56:58.192257009Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\"" Jul 6 23:56:58.193639 containerd[1489]: time="2025-07-06T23:56:58.193204741Z" level=info msg="StartContainer for \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\"" Jul 6 23:56:58.235234 systemd[1]: Started cri-containerd-30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe.scope - libcontainer container 30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe. Jul 6 23:56:58.276304 containerd[1489]: time="2025-07-06T23:56:58.276269238Z" level=info msg="StartContainer for \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\" returns successfully" Jul 6 23:56:58.284618 systemd[1]: cri-containerd-30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe.scope: Deactivated successfully. 
Jul 6 23:56:58.356422 containerd[1489]: time="2025-07-06T23:56:58.356311988Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:58.358317 containerd[1489]: time="2025-07-06T23:56:58.358067339Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:56:58.359809 containerd[1489]: time="2025-07-06T23:56:58.359411619Z" level=info msg="shim disconnected" id=30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe namespace=k8s.io Jul 6 23:56:58.359809 containerd[1489]: time="2025-07-06T23:56:58.359474891Z" level=warning msg="cleaning up after shim disconnected" id=30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe namespace=k8s.io Jul 6 23:56:58.359809 containerd[1489]: time="2025-07-06T23:56:58.359490013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:58.360479 containerd[1489]: time="2025-07-06T23:56:58.360221752Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:58.362698 containerd[1489]: time="2025-07-06T23:56:58.362587169Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.034045985s" Jul 6 23:56:58.362963 containerd[1489]: time="2025-07-06T23:56:58.362783208Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:56:58.383392 containerd[1489]: time="2025-07-06T23:56:58.382617430Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:56:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:56:58.386729 containerd[1489]: time="2025-07-06T23:56:58.386675187Z" level=info msg="CreateContainer within sandbox \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:56:58.412956 containerd[1489]: time="2025-07-06T23:56:58.412854907Z" level=info msg="CreateContainer within sandbox \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\"" Jul 6 23:56:58.415279 containerd[1489]: time="2025-07-06T23:56:58.413774798Z" level=info msg="StartContainer for \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\"" Jul 6 23:56:58.450848 systemd[1]: Started cri-containerd-d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5.scope - libcontainer container d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5. 
Jul 6 23:56:58.480737 containerd[1489]: time="2025-07-06T23:56:58.480678858Z" level=info msg="StartContainer for \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\" returns successfully" Jul 6 23:56:59.169771 containerd[1489]: time="2025-07-06T23:56:59.169613803Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:56:59.210240 containerd[1489]: time="2025-07-06T23:56:59.210095292Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\"" Jul 6 23:56:59.211410 containerd[1489]: time="2025-07-06T23:56:59.210567420Z" level=info msg="StartContainer for \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\"" Jul 6 23:56:59.248768 systemd[1]: Started cri-containerd-12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa.scope - libcontainer container 12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa. Jul 6 23:56:59.287141 containerd[1489]: time="2025-07-06T23:56:59.287087077Z" level=info msg="StartContainer for \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\" returns successfully" Jul 6 23:56:59.290034 systemd[1]: cri-containerd-12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa.scope: Deactivated successfully. 
Jul 6 23:56:59.306742 kubelet[2528]: I0706 23:56:59.306610 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-55tbg" podStartSLOduration=2.347315833 podStartE2EDuration="9.306594048s" podCreationTimestamp="2025-07-06 23:56:50 +0000 UTC" firstStartedPulling="2025-07-06 23:56:51.404433371 +0000 UTC m=+6.527356720" lastFinishedPulling="2025-07-06 23:56:58.363711586 +0000 UTC m=+13.486634935" observedRunningTime="2025-07-06 23:56:59.30403171 +0000 UTC m=+14.426955050" watchObservedRunningTime="2025-07-06 23:56:59.306594048 +0000 UTC m=+14.429517386" Jul 6 23:56:59.330461 containerd[1489]: time="2025-07-06T23:56:59.330359990Z" level=info msg="shim disconnected" id=12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa namespace=k8s.io Jul 6 23:56:59.330461 containerd[1489]: time="2025-07-06T23:56:59.330447433Z" level=warning msg="cleaning up after shim disconnected" id=12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa namespace=k8s.io Jul 6 23:56:59.330461 containerd[1489]: time="2025-07-06T23:56:59.330458237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:59.400047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375054452.mount: Deactivated successfully. 
Jul 6 23:57:00.208095 containerd[1489]: time="2025-07-06T23:57:00.208043549Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:57:00.246106 containerd[1489]: time="2025-07-06T23:57:00.246059555Z" level=info msg="CreateContainer within sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\"" Jul 6 23:57:00.247845 containerd[1489]: time="2025-07-06T23:57:00.247814397Z" level=info msg="StartContainer for \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\"" Jul 6 23:57:00.281748 systemd[1]: Started cri-containerd-96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882.scope - libcontainer container 96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882. Jul 6 23:57:00.317045 containerd[1489]: time="2025-07-06T23:57:00.316998802Z" level=info msg="StartContainer for \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\" returns successfully" Jul 6 23:57:00.459307 kubelet[2528]: I0706 23:57:00.458877 2528 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:57:00.513432 systemd[1]: Created slice kubepods-burstable-pod949f8c90_0ab9_4807_9968_0b3e12bc832b.slice - libcontainer container kubepods-burstable-pod949f8c90_0ab9_4807_9968_0b3e12bc832b.slice. Jul 6 23:57:00.524522 systemd[1]: Created slice kubepods-burstable-podb4d3e652_cc06_4013_b053_a3ce5d6ce6be.slice - libcontainer container kubepods-burstable-podb4d3e652_cc06_4013_b053_a3ce5d6ce6be.slice. 
Jul 6 23:57:00.602475 kubelet[2528]: I0706 23:57:00.602408 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lwr\" (UniqueName: \"kubernetes.io/projected/949f8c90-0ab9-4807-9968-0b3e12bc832b-kube-api-access-j9lwr\") pod \"coredns-7c65d6cfc9-9mrsh\" (UID: \"949f8c90-0ab9-4807-9968-0b3e12bc832b\") " pod="kube-system/coredns-7c65d6cfc9-9mrsh" Jul 6 23:57:00.602475 kubelet[2528]: I0706 23:57:00.602464 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949f8c90-0ab9-4807-9968-0b3e12bc832b-config-volume\") pod \"coredns-7c65d6cfc9-9mrsh\" (UID: \"949f8c90-0ab9-4807-9968-0b3e12bc832b\") " pod="kube-system/coredns-7c65d6cfc9-9mrsh" Jul 6 23:57:00.602475 kubelet[2528]: I0706 23:57:00.602487 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4d3e652-cc06-4013-b053-a3ce5d6ce6be-config-volume\") pod \"coredns-7c65d6cfc9-jrd8h\" (UID: \"b4d3e652-cc06-4013-b053-a3ce5d6ce6be\") " pod="kube-system/coredns-7c65d6cfc9-jrd8h" Jul 6 23:57:00.602901 kubelet[2528]: I0706 23:57:00.602508 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262tx\" (UniqueName: \"kubernetes.io/projected/b4d3e652-cc06-4013-b053-a3ce5d6ce6be-kube-api-access-262tx\") pod \"coredns-7c65d6cfc9-jrd8h\" (UID: \"b4d3e652-cc06-4013-b053-a3ce5d6ce6be\") " pod="kube-system/coredns-7c65d6cfc9-jrd8h" Jul 6 23:57:00.820546 containerd[1489]: time="2025-07-06T23:57:00.820456796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9mrsh,Uid:949f8c90-0ab9-4807-9968-0b3e12bc832b,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:00.828561 containerd[1489]: time="2025-07-06T23:57:00.828357541Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jrd8h,Uid:b4d3e652-cc06-4013-b053-a3ce5d6ce6be,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:01.229903 kubelet[2528]: I0706 23:57:01.229009 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jv4hk" podStartSLOduration=6.058532921 podStartE2EDuration="11.228980085s" podCreationTimestamp="2025-07-06 23:56:50 +0000 UTC" firstStartedPulling="2025-07-06 23:56:51.157250083 +0000 UTC m=+6.280173422" lastFinishedPulling="2025-07-06 23:56:56.327697247 +0000 UTC m=+11.450620586" observedRunningTime="2025-07-06 23:57:01.2277821 +0000 UTC m=+16.350705459" watchObservedRunningTime="2025-07-06 23:57:01.228980085 +0000 UTC m=+16.351903465" Jul 6 23:57:02.597590 systemd-networkd[1391]: cilium_host: Link UP Jul 6 23:57:02.598518 systemd-networkd[1391]: cilium_net: Link UP Jul 6 23:57:02.598524 systemd-networkd[1391]: cilium_net: Gained carrier Jul 6 23:57:02.600947 systemd-networkd[1391]: cilium_host: Gained carrier Jul 6 23:57:02.603177 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jul 6 23:57:02.729798 systemd-networkd[1391]: cilium_vxlan: Link UP Jul 6 23:57:02.729982 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jul 6 23:57:03.163718 kernel: NET: Registered PF_ALG protocol family Jul 6 23:57:03.383854 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jul 6 23:57:03.832793 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Jul 6 23:57:03.839243 systemd-networkd[1391]: lxc_health: Link UP Jul 6 23:57:03.851683 systemd-networkd[1391]: lxc_health: Gained carrier Jul 6 23:57:04.423616 systemd-networkd[1391]: lxc65c7986461d2: Link UP Jul 6 23:57:04.431730 kernel: eth0: renamed from tmpdc40d Jul 6 23:57:04.438396 systemd-networkd[1391]: lxc8bd1d293246c: Link UP Jul 6 23:57:04.456663 kernel: eth0: renamed from tmp21f53 Jul 6 23:57:04.454855 systemd-networkd[1391]: lxc65c7986461d2: Gained carrier Jul 6 23:57:04.463052 systemd-networkd[1391]: lxc8bd1d293246c: Gained carrier Jul 6 
23:57:05.495979 systemd-networkd[1391]: lxc65c7986461d2: Gained IPv6LL Jul 6 23:57:05.559919 systemd-networkd[1391]: lxc8bd1d293246c: Gained IPv6LL Jul 6 23:57:05.880861 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jul 6 23:57:08.154186 containerd[1489]: time="2025-07-06T23:57:08.152795495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:08.154186 containerd[1489]: time="2025-07-06T23:57:08.152920195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:08.154186 containerd[1489]: time="2025-07-06T23:57:08.152931680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:08.158295 containerd[1489]: time="2025-07-06T23:57:08.153136206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:08.180225 systemd[1]: Started cri-containerd-21f536541a1719df1c311c5316f80b34c42b61557c5156e0479db5600ba7e01b.scope - libcontainer container 21f536541a1719df1c311c5316f80b34c42b61557c5156e0479db5600ba7e01b. Jul 6 23:57:08.203987 containerd[1489]: time="2025-07-06T23:57:08.203762329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:08.203987 containerd[1489]: time="2025-07-06T23:57:08.203823115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:08.203987 containerd[1489]: time="2025-07-06T23:57:08.203840973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:08.203987 containerd[1489]: time="2025-07-06T23:57:08.203907411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:08.228819 systemd[1]: Started cri-containerd-dc40d05a756518542959de390cc5fc0dcf0b904345321bd53357182662b6e445.scope - libcontainer container dc40d05a756518542959de390cc5fc0dcf0b904345321bd53357182662b6e445. Jul 6 23:57:08.252817 containerd[1489]: time="2025-07-06T23:57:08.252702111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jrd8h,Uid:b4d3e652-cc06-4013-b053-a3ce5d6ce6be,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f536541a1719df1c311c5316f80b34c42b61557c5156e0479db5600ba7e01b\"" Jul 6 23:57:08.257087 containerd[1489]: time="2025-07-06T23:57:08.257022230Z" level=info msg="CreateContainer within sandbox \"21f536541a1719df1c311c5316f80b34c42b61557c5156e0479db5600ba7e01b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:57:08.278182 containerd[1489]: time="2025-07-06T23:57:08.278082944Z" level=info msg="CreateContainer within sandbox \"21f536541a1719df1c311c5316f80b34c42b61557c5156e0479db5600ba7e01b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c025653b90e57702043a189e76f4dc495f82b285fe4c819cd7d2b8b4109d1b9\"" Jul 6 23:57:08.279741 containerd[1489]: time="2025-07-06T23:57:08.278829272Z" level=info msg="StartContainer for \"1c025653b90e57702043a189e76f4dc495f82b285fe4c819cd7d2b8b4109d1b9\"" Jul 6 23:57:08.306116 containerd[1489]: time="2025-07-06T23:57:08.306068604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9mrsh,Uid:949f8c90-0ab9-4807-9968-0b3e12bc832b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc40d05a756518542959de390cc5fc0dcf0b904345321bd53357182662b6e445\"" Jul 6 23:57:08.311957 containerd[1489]: time="2025-07-06T23:57:08.311916251Z" level=info 
msg="CreateContainer within sandbox \"dc40d05a756518542959de390cc5fc0dcf0b904345321bd53357182662b6e445\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:57:08.313936 systemd[1]: Started cri-containerd-1c025653b90e57702043a189e76f4dc495f82b285fe4c819cd7d2b8b4109d1b9.scope - libcontainer container 1c025653b90e57702043a189e76f4dc495f82b285fe4c819cd7d2b8b4109d1b9. Jul 6 23:57:08.327799 kubelet[2528]: I0706 23:57:08.327760 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:08.340016 containerd[1489]: time="2025-07-06T23:57:08.339934399Z" level=info msg="CreateContainer within sandbox \"dc40d05a756518542959de390cc5fc0dcf0b904345321bd53357182662b6e445\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1138faeedb301b990f6b62c8cb7c879d8c5f51a385ea4610427e480678a8f86\"" Jul 6 23:57:08.341817 containerd[1489]: time="2025-07-06T23:57:08.341473934Z" level=info msg="StartContainer for \"d1138faeedb301b990f6b62c8cb7c879d8c5f51a385ea4610427e480678a8f86\"" Jul 6 23:57:08.360938 containerd[1489]: time="2025-07-06T23:57:08.360493918Z" level=info msg="StartContainer for \"1c025653b90e57702043a189e76f4dc495f82b285fe4c819cd7d2b8b4109d1b9\" returns successfully" Jul 6 23:57:08.387977 systemd[1]: Started cri-containerd-d1138faeedb301b990f6b62c8cb7c879d8c5f51a385ea4610427e480678a8f86.scope - libcontainer container d1138faeedb301b990f6b62c8cb7c879d8c5f51a385ea4610427e480678a8f86. Jul 6 23:57:08.421043 containerd[1489]: time="2025-07-06T23:57:08.420920310Z" level=info msg="StartContainer for \"d1138faeedb301b990f6b62c8cb7c879d8c5f51a385ea4610427e480678a8f86\" returns successfully" Jul 6 23:57:09.164403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467244771.mount: Deactivated successfully. 
Jul 6 23:57:09.255923 kubelet[2528]: I0706 23:57:09.254826 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jrd8h" podStartSLOduration=19.254798896 podStartE2EDuration="19.254798896s" podCreationTimestamp="2025-07-06 23:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:09.252479779 +0000 UTC m=+24.375403159" watchObservedRunningTime="2025-07-06 23:57:09.254798896 +0000 UTC m=+24.377722265" Jul 6 23:57:09.292387 kubelet[2528]: I0706 23:57:09.292298 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9mrsh" podStartSLOduration=19.292275367 podStartE2EDuration="19.292275367s" podCreationTimestamp="2025-07-06 23:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:09.289934555 +0000 UTC m=+24.412857935" watchObservedRunningTime="2025-07-06 23:57:09.292275367 +0000 UTC m=+24.415198716" Jul 6 23:59:26.430316 systemd[1]: Started sshd@7-95.217.0.60:22-13.221.248.111:27594.service - OpenSSH per-connection server daemon (13.221.248.111:27594). Jul 6 23:59:31.770224 sshd[3934]: Connection closed by 13.221.248.111 port 27594 [preauth] Jul 6 23:59:31.773225 systemd[1]: sshd@7-95.217.0.60:22-13.221.248.111:27594.service: Deactivated successfully. Jul 7 00:00:00.196237 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 7 00:00:00.199947 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Jul 7 00:00:00.210120 systemd[1]: logrotate.service: Deactivated successfully. Jul 7 00:00:00.234592 systemd[1]: mdadm.service: Deactivated successfully. Jul 7 00:00:00.235182 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. 
Jul 7 00:01:27.030105 systemd[1]: Started sshd@8-95.217.0.60:22-147.75.109.163:54706.service - OpenSSH per-connection server daemon (147.75.109.163:54706). Jul 7 00:01:28.065123 sshd[3961]: Accepted publickey for core from 147.75.109.163 port 54706 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:28.067797 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:28.077789 systemd-logind[1467]: New session 8 of user core. Jul 7 00:01:28.083983 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:01:29.543144 sshd[3961]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:29.549695 systemd[1]: sshd@8-95.217.0.60:22-147.75.109.163:54706.service: Deactivated successfully. Jul 7 00:01:29.551831 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:01:29.553477 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:01:29.554977 systemd-logind[1467]: Removed session 8. Jul 7 00:01:34.727111 systemd[1]: Started sshd@9-95.217.0.60:22-147.75.109.163:54714.service - OpenSSH per-connection server daemon (147.75.109.163:54714). Jul 7 00:01:35.764619 sshd[3975]: Accepted publickey for core from 147.75.109.163 port 54714 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:35.767080 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:35.775449 systemd-logind[1467]: New session 9 of user core. Jul 7 00:01:35.779918 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:01:36.604725 sshd[3975]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:36.609542 systemd[1]: sshd@9-95.217.0.60:22-147.75.109.163:54714.service: Deactivated successfully. Jul 7 00:01:36.613780 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:01:36.616166 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. 
Jul 7 00:01:36.618767 systemd-logind[1467]: Removed session 9. Jul 7 00:01:41.778054 systemd[1]: Started sshd@10-95.217.0.60:22-147.75.109.163:48148.service - OpenSSH per-connection server daemon (147.75.109.163:48148). Jul 7 00:01:42.793323 sshd[3990]: Accepted publickey for core from 147.75.109.163 port 48148 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:42.805156 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:42.814578 systemd-logind[1467]: New session 10 of user core. Jul 7 00:01:42.822859 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:01:43.596819 sshd[3990]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:43.603720 systemd[1]: sshd@10-95.217.0.60:22-147.75.109.163:48148.service: Deactivated successfully. Jul 7 00:01:43.608381 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:01:43.609958 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:01:43.612156 systemd-logind[1467]: Removed session 10. Jul 7 00:01:43.783451 systemd[1]: Started sshd@11-95.217.0.60:22-147.75.109.163:48150.service - OpenSSH per-connection server daemon (147.75.109.163:48150). Jul 7 00:01:44.817961 sshd[4005]: Accepted publickey for core from 147.75.109.163 port 48150 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:44.820547 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:44.827747 systemd-logind[1467]: New session 11 of user core. Jul 7 00:01:44.837955 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:01:45.685655 sshd[4005]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:45.690763 systemd[1]: sshd@11-95.217.0.60:22-147.75.109.163:48150.service: Deactivated successfully. Jul 7 00:01:45.693507 systemd[1]: session-11.scope: Deactivated successfully. 
Jul 7 00:01:45.696338 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:01:45.697920 systemd-logind[1467]: Removed session 11. Jul 7 00:01:45.869120 systemd[1]: Started sshd@12-95.217.0.60:22-147.75.109.163:48162.service - OpenSSH per-connection server daemon (147.75.109.163:48162). Jul 7 00:01:46.903437 sshd[4018]: Accepted publickey for core from 147.75.109.163 port 48162 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:46.905618 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:46.914278 systemd-logind[1467]: New session 12 of user core. Jul 7 00:01:46.919925 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:01:47.731161 sshd[4018]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:47.736214 systemd[1]: sshd@12-95.217.0.60:22-147.75.109.163:48162.service: Deactivated successfully. Jul 7 00:01:47.740085 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:01:47.743018 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:01:47.745100 systemd-logind[1467]: Removed session 12. Jul 7 00:01:52.908090 systemd[1]: Started sshd@13-95.217.0.60:22-147.75.109.163:59618.service - OpenSSH per-connection server daemon (147.75.109.163:59618). Jul 7 00:01:53.921932 sshd[4033]: Accepted publickey for core from 147.75.109.163 port 59618 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:53.924118 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:53.932342 systemd-logind[1467]: New session 13 of user core. Jul 7 00:01:53.944931 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:01:54.736211 sshd[4033]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:54.742057 systemd[1]: sshd@13-95.217.0.60:22-147.75.109.163:59618.service: Deactivated successfully. 
Jul 7 00:01:54.745800 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:01:54.748029 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:01:54.750414 systemd-logind[1467]: Removed session 13. Jul 7 00:01:54.915383 systemd[1]: Started sshd@14-95.217.0.60:22-147.75.109.163:59626.service - OpenSSH per-connection server daemon (147.75.109.163:59626). Jul 7 00:01:55.932917 sshd[4046]: Accepted publickey for core from 147.75.109.163 port 59626 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:55.935528 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:55.944426 systemd-logind[1467]: New session 14 of user core. Jul 7 00:01:55.952082 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:01:56.972334 sshd[4046]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:56.984213 systemd[1]: sshd@14-95.217.0.60:22-147.75.109.163:59626.service: Deactivated successfully. Jul 7 00:01:56.987953 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:01:56.989401 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:01:56.991134 systemd-logind[1467]: Removed session 14. Jul 7 00:01:57.153434 systemd[1]: Started sshd@15-95.217.0.60:22-147.75.109.163:58570.service - OpenSSH per-connection server daemon (147.75.109.163:58570). Jul 7 00:01:58.191850 sshd[4058]: Accepted publickey for core from 147.75.109.163 port 58570 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:01:58.194328 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:58.203006 systemd-logind[1467]: New session 15 of user core. Jul 7 00:01:58.209905 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:02:00.900697 sshd[4058]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:00.907471 systemd-logind[1467]: Session 15 logged out. 
Waiting for processes to exit. Jul 7 00:02:00.908257 systemd[1]: sshd@15-95.217.0.60:22-147.75.109.163:58570.service: Deactivated successfully. Jul 7 00:02:00.912309 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:02:00.914144 systemd-logind[1467]: Removed session 15. Jul 7 00:02:01.073664 systemd[1]: Started sshd@16-95.217.0.60:22-147.75.109.163:58578.service - OpenSSH per-connection server daemon (147.75.109.163:58578). Jul 7 00:02:02.077108 sshd[4076]: Accepted publickey for core from 147.75.109.163 port 58578 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:02:02.079948 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:02.088231 systemd-logind[1467]: New session 16 of user core. Jul 7 00:02:02.093887 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:02:03.093318 sshd[4076]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:03.096677 systemd[1]: sshd@16-95.217.0.60:22-147.75.109.163:58578.service: Deactivated successfully. Jul 7 00:02:03.098342 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:02:03.100068 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:02:03.101853 systemd-logind[1467]: Removed session 16. Jul 7 00:02:03.281225 systemd[1]: Started sshd@17-95.217.0.60:22-147.75.109.163:58594.service - OpenSSH per-connection server daemon (147.75.109.163:58594). Jul 7 00:02:04.301982 sshd[4087]: Accepted publickey for core from 147.75.109.163 port 58594 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:02:04.304416 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:04.311916 systemd-logind[1467]: New session 17 of user core. Jul 7 00:02:04.318910 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 7 00:02:05.101602 sshd[4087]: pam_unix(sshd:session): session closed for user core
Jul 7 00:02:05.106657 systemd[1]: sshd@17-95.217.0.60:22-147.75.109.163:58594.service: Deactivated successfully.
Jul 7 00:02:05.109277 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:02:05.110464 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:02:05.111996 systemd-logind[1467]: Removed session 17.
Jul 7 00:02:10.286187 systemd[1]: Started sshd@18-95.217.0.60:22-147.75.109.163:46154.service - OpenSSH per-connection server daemon (147.75.109.163:46154).
Jul 7 00:02:11.307423 sshd[4103]: Accepted publickey for core from 147.75.109.163 port 46154 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:02:11.309516 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:02:11.319541 systemd-logind[1467]: New session 18 of user core.
Jul 7 00:02:11.326044 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:02:12.110551 sshd[4103]: pam_unix(sshd:session): session closed for user core
Jul 7 00:02:12.115089 systemd[1]: sshd@18-95.217.0.60:22-147.75.109.163:46154.service: Deactivated successfully.
Jul 7 00:02:12.117884 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:02:12.119768 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:02:12.122369 systemd-logind[1467]: Removed session 18.
Jul 7 00:02:17.295233 systemd[1]: Started sshd@19-95.217.0.60:22-147.75.109.163:52260.service - OpenSSH per-connection server daemon (147.75.109.163:52260).
Jul 7 00:02:18.334766 sshd[4116]: Accepted publickey for core from 147.75.109.163 port 52260 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:02:18.336971 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:02:18.345728 systemd-logind[1467]: New session 19 of user core.
Jul 7 00:02:18.356899 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 00:02:19.155493 sshd[4116]: pam_unix(sshd:session): session closed for user core
Jul 7 00:02:19.160453 systemd[1]: sshd@19-95.217.0.60:22-147.75.109.163:52260.service: Deactivated successfully.
Jul 7 00:02:19.165222 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:02:19.168915 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:02:19.171116 systemd-logind[1467]: Removed session 19.
Jul 7 00:02:19.343088 systemd[1]: Started sshd@20-95.217.0.60:22-147.75.109.163:52262.service - OpenSSH per-connection server daemon (147.75.109.163:52262).
Jul 7 00:02:20.269798 update_engine[1472]: I20250707 00:02:20.269668 1472 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 7 00:02:20.269798 update_engine[1472]: I20250707 00:02:20.269768 1472 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 7 00:02:20.275876 update_engine[1472]: I20250707 00:02:20.275753 1472 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 7 00:02:20.276528 update_engine[1472]: I20250707 00:02:20.276468 1472 omaha_request_params.cc:62] Current group set to lts
Jul 7 00:02:20.276959 update_engine[1472]: I20250707 00:02:20.276726 1472 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 7 00:02:20.276959 update_engine[1472]: I20250707 00:02:20.276751 1472 update_attempter.cc:643] Scheduling an action processor start.
Jul 7 00:02:20.276959 update_engine[1472]: I20250707 00:02:20.276780 1472 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 00:02:20.276959 update_engine[1472]: I20250707 00:02:20.276837 1472 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 7 00:02:20.276959 update_engine[1472]: I20250707 00:02:20.276921 1472 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 00:02:20.276959 update_engine[1472]: I20250707 00:02:20.276933 1472 omaha_request_action.cc:272] Request:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]:
Jul 7 00:02:20.276959 update_engine[1472]: I20250707 00:02:20.276944 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:02:20.300855 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 7 00:02:20.302115 update_engine[1472]: I20250707 00:02:20.301564 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:02:20.302429 update_engine[1472]: I20250707 00:02:20.302183 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:02:20.303885 update_engine[1472]: E20250707 00:02:20.303840 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:02:20.303942 update_engine[1472]: I20250707 00:02:20.303925 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 7 00:02:20.370770 sshd[4130]: Accepted publickey for core from 147.75.109.163 port 52262 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:02:20.373778 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:02:20.382720 systemd-logind[1467]: New session 20 of user core.
Jul 7 00:02:20.396054 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 00:02:22.643501 containerd[1489]: time="2025-07-07T00:02:22.643389894Z" level=info msg="StopContainer for \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\" with timeout 30 (s)"
Jul 7 00:02:22.648508 containerd[1489]: time="2025-07-07T00:02:22.648388684Z" level=info msg="Stop container \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\" with signal terminated"
Jul 7 00:02:22.710850 systemd[1]: run-containerd-runc-k8s.io-96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882-runc.ZlMbeT.mount: Deactivated successfully.
Jul 7 00:02:22.713356 systemd[1]: cri-containerd-d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5.scope: Deactivated successfully.
Jul 7 00:02:22.736912 containerd[1489]: time="2025-07-07T00:02:22.736837710Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 00:02:22.739328 containerd[1489]: time="2025-07-07T00:02:22.739302710Z" level=info msg="StopContainer for \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\" with timeout 2 (s)"
Jul 7 00:02:22.739515 containerd[1489]: time="2025-07-07T00:02:22.739496025Z" level=info msg="Stop container \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\" with signal terminated"
Jul 7 00:02:22.745738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5-rootfs.mount: Deactivated successfully.
Jul 7 00:02:22.749535 systemd-networkd[1391]: lxc_health: Link DOWN
Jul 7 00:02:22.749541 systemd-networkd[1391]: lxc_health: Lost carrier
Jul 7 00:02:22.750463 systemd-resolved[1351]: lxc_health: Failed to determine whether the interface is managed, ignoring: No such file or directory
Jul 7 00:02:22.759343 containerd[1489]: time="2025-07-07T00:02:22.759194314Z" level=info msg="shim disconnected" id=d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5 namespace=k8s.io
Jul 7 00:02:22.759958 containerd[1489]: time="2025-07-07T00:02:22.759919233Z" level=warning msg="cleaning up after shim disconnected" id=d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5 namespace=k8s.io
Jul 7 00:02:22.760060 containerd[1489]: time="2025-07-07T00:02:22.760047687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:22.773256 systemd[1]: cri-containerd-96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882.scope: Deactivated successfully.
Jul 7 00:02:22.773994 systemd[1]: cri-containerd-96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882.scope: Consumed 8.284s CPU time.
Jul 7 00:02:22.779102 containerd[1489]: time="2025-07-07T00:02:22.779052374Z" level=info msg="StopContainer for \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\" returns successfully"
Jul 7 00:02:22.780016 containerd[1489]: time="2025-07-07T00:02:22.779977793Z" level=info msg="StopPodSandbox for \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\""
Jul 7 00:02:22.780016 containerd[1489]: time="2025-07-07T00:02:22.780000286Z" level=info msg="Container to stop \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 00:02:22.785613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b-shm.mount: Deactivated successfully.
Jul 7 00:02:22.790451 systemd[1]: cri-containerd-97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b.scope: Deactivated successfully.
Jul 7 00:02:22.801873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882-rootfs.mount: Deactivated successfully.
Jul 7 00:02:22.810655 containerd[1489]: time="2025-07-07T00:02:22.810541261Z" level=info msg="shim disconnected" id=96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882 namespace=k8s.io
Jul 7 00:02:22.810655 containerd[1489]: time="2025-07-07T00:02:22.810598289Z" level=warning msg="cleaning up after shim disconnected" id=96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882 namespace=k8s.io
Jul 7 00:02:22.810655 containerd[1489]: time="2025-07-07T00:02:22.810605072Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:22.813967 containerd[1489]: time="2025-07-07T00:02:22.813907625Z" level=info msg="shim disconnected" id=97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b namespace=k8s.io
Jul 7 00:02:22.813967 containerd[1489]: time="2025-07-07T00:02:22.813960324Z" level=warning msg="cleaning up after shim disconnected" id=97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b namespace=k8s.io
Jul 7 00:02:22.813967 containerd[1489]: time="2025-07-07T00:02:22.813967217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:22.823202 containerd[1489]: time="2025-07-07T00:02:22.823158232Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:02:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 00:02:22.827003 containerd[1489]: time="2025-07-07T00:02:22.826966171Z" level=info msg="StopContainer for \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\" returns successfully"
Jul 7 00:02:22.827794 containerd[1489]: time="2025-07-07T00:02:22.827615949Z" level=info msg="StopPodSandbox for \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\""
Jul 7 00:02:22.827887 containerd[1489]: time="2025-07-07T00:02:22.827874638Z" level=info msg="Container to stop \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 00:02:22.828019 containerd[1489]: time="2025-07-07T00:02:22.828000216Z" level=info msg="Container to stop \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 00:02:22.828152 containerd[1489]: time="2025-07-07T00:02:22.828141482Z" level=info msg="Container to stop \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 00:02:22.828195 containerd[1489]: time="2025-07-07T00:02:22.828186768Z" level=info msg="Container to stop \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 00:02:22.828258 containerd[1489]: time="2025-07-07T00:02:22.828247594Z" level=info msg="Container to stop \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 00:02:22.834811 systemd[1]: cri-containerd-1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73.scope: Deactivated successfully.
Jul 7 00:02:22.838991 containerd[1489]: time="2025-07-07T00:02:22.838941489Z" level=info msg="TearDown network for sandbox \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" successfully"
Jul 7 00:02:22.838991 containerd[1489]: time="2025-07-07T00:02:22.838977297Z" level=info msg="StopPodSandbox for \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" returns successfully"
Jul 7 00:02:22.859659 containerd[1489]: time="2025-07-07T00:02:22.859569254Z" level=info msg="shim disconnected" id=1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73 namespace=k8s.io
Jul 7 00:02:22.860552 containerd[1489]: time="2025-07-07T00:02:22.859648435Z" level=warning msg="cleaning up after shim disconnected" id=1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73 namespace=k8s.io
Jul 7 00:02:22.860552 containerd[1489]: time="2025-07-07T00:02:22.860542205Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:22.871667 containerd[1489]: time="2025-07-07T00:02:22.871594588Z" level=info msg="TearDown network for sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" successfully"
Jul 7 00:02:22.871667 containerd[1489]: time="2025-07-07T00:02:22.871664571Z" level=info msg="StopPodSandbox for \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" returns successfully"
Jul 7 00:02:22.969086 kubelet[2528]: I0707 00:02:22.969022 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcf2c35e-7dca-4015-90a2-421d983bac78-cilium-config-path\") pod \"fcf2c35e-7dca-4015-90a2-421d983bac78\" (UID: \"fcf2c35e-7dca-4015-90a2-421d983bac78\") "
Jul 7 00:02:22.969086 kubelet[2528]: I0707 00:02:22.969085 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hst4k\" (UniqueName: \"kubernetes.io/projected/fcf2c35e-7dca-4015-90a2-421d983bac78-kube-api-access-hst4k\") pod \"fcf2c35e-7dca-4015-90a2-421d983bac78\" (UID: \"fcf2c35e-7dca-4015-90a2-421d983bac78\") "
Jul 7 00:02:22.992166 kubelet[2528]: I0707 00:02:22.988780 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcf2c35e-7dca-4015-90a2-421d983bac78-kube-api-access-hst4k" (OuterVolumeSpecName: "kube-api-access-hst4k") pod "fcf2c35e-7dca-4015-90a2-421d983bac78" (UID: "fcf2c35e-7dca-4015-90a2-421d983bac78"). InnerVolumeSpecName "kube-api-access-hst4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 00:02:22.997113 kubelet[2528]: I0707 00:02:22.997047 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcf2c35e-7dca-4015-90a2-421d983bac78-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fcf2c35e-7dca-4015-90a2-421d983bac78" (UID: "fcf2c35e-7dca-4015-90a2-421d983bac78"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 00:02:23.044792 systemd[1]: Removed slice kubepods-besteffort-podfcf2c35e_7dca_4015_90a2_421d983bac78.slice - libcontainer container kubepods-besteffort-podfcf2c35e_7dca_4015_90a2_421d983bac78.slice.
Jul 7 00:02:23.071581 kubelet[2528]: I0707 00:02:23.069554 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-net\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.071581 kubelet[2528]: I0707 00:02:23.069608 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/327c3bdd-1104-4eb4-86e5-8e4700bdd490-clustermesh-secrets\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.071581 kubelet[2528]: I0707 00:02:23.069676 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-run\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.071581 kubelet[2528]: I0707 00:02:23.069706 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hubble-tls\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.071581 kubelet[2528]: I0707 00:02:23.069730 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-xtables-lock\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.071581 kubelet[2528]: I0707 00:02:23.069805 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-kernel\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075145 kubelet[2528]: I0707 00:02:23.069833 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hostproc\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075145 kubelet[2528]: I0707 00:02:23.069885 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwvws\" (UniqueName: \"kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-kube-api-access-zwvws\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075145 kubelet[2528]: I0707 00:02:23.069909 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-bpf-maps\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075145 kubelet[2528]: I0707 00:02:23.069938 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-etc-cni-netd\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075145 kubelet[2528]: I0707 00:02:23.069970 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-config-path\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075145 kubelet[2528]: I0707 00:02:23.069994 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-cgroup\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075435 kubelet[2528]: I0707 00:02:23.070018 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-lib-modules\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075435 kubelet[2528]: I0707 00:02:23.070041 2528 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cni-path\") pod \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\" (UID: \"327c3bdd-1104-4eb4-86e5-8e4700bdd490\") "
Jul 7 00:02:23.075435 kubelet[2528]: I0707 00:02:23.070163 2528 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hst4k\" (UniqueName: \"kubernetes.io/projected/fcf2c35e-7dca-4015-90a2-421d983bac78-kube-api-access-hst4k\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.075435 kubelet[2528]: I0707 00:02:23.070182 2528 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcf2c35e-7dca-4015-90a2-421d983bac78-cilium-config-path\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.075435 kubelet[2528]: I0707 00:02:23.071904 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.075678 kubelet[2528]: I0707 00:02:23.075591 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327c3bdd-1104-4eb4-86e5-8e4700bdd490-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 7 00:02:23.075730 kubelet[2528]: I0707 00:02:23.075710 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.078811 kubelet[2528]: I0707 00:02:23.078778 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.078977 kubelet[2528]: I0707 00:02:23.078955 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.079088 kubelet[2528]: I0707 00:02:23.079069 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hostproc" (OuterVolumeSpecName: "hostproc") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.079979 kubelet[2528]: I0707 00:02:23.079945 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.080126 kubelet[2528]: I0707 00:02:23.080011 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.082010 kubelet[2528]: I0707 00:02:23.081984 2528 scope.go:117] "RemoveContainer" containerID="d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5"
Jul 7 00:02:23.082423 kubelet[2528]: I0707 00:02:23.082388 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 00:02:23.084255 kubelet[2528]: I0707 00:02:23.082912 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.084376 kubelet[2528]: I0707 00:02:23.082940 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.084460 kubelet[2528]: I0707 00:02:23.083027 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cni-path" (OuterVolumeSpecName: "cni-path") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 00:02:23.088181 kubelet[2528]: I0707 00:02:23.088123 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 00:02:23.089099 containerd[1489]: time="2025-07-07T00:02:23.089054961Z" level=info msg="RemoveContainer for \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\""
Jul 7 00:02:23.089556 kubelet[2528]: I0707 00:02:23.089521 2528 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-kube-api-access-zwvws" (OuterVolumeSpecName: "kube-api-access-zwvws") pod "327c3bdd-1104-4eb4-86e5-8e4700bdd490" (UID: "327c3bdd-1104-4eb4-86e5-8e4700bdd490"). InnerVolumeSpecName "kube-api-access-zwvws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 00:02:23.098212 containerd[1489]: time="2025-07-07T00:02:23.097818057Z" level=info msg="RemoveContainer for \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\" returns successfully"
Jul 7 00:02:23.098334 kubelet[2528]: I0707 00:02:23.098119 2528 scope.go:117] "RemoveContainer" containerID="d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5"
Jul 7 00:02:23.131067 containerd[1489]: time="2025-07-07T00:02:23.116126751Z" level=error msg="ContainerStatus for \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\": not found"
Jul 7 00:02:23.148900 kubelet[2528]: E0707 00:02:23.148809 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\": not found" containerID="d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5"
Jul 7 00:02:23.170919 kubelet[2528]: I0707 00:02:23.170864 2528 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-kernel\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.170919 kubelet[2528]: I0707 00:02:23.170904 2528 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hostproc\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.170919 kubelet[2528]: I0707 00:02:23.170918 2528 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwvws\" (UniqueName: \"kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-kube-api-access-zwvws\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.170930 2528 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-bpf-maps\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.170971 2528 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-etc-cni-netd\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.170983 2528 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-config-path\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.170995 2528 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-cgroup\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.171010 2528 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-lib-modules\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.171022 2528 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cni-path\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.171034 2528 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-host-proc-sys-net\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171114 kubelet[2528]: I0707 00:02:23.171047 2528 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/327c3bdd-1104-4eb4-86e5-8e4700bdd490-clustermesh-secrets\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171329 kubelet[2528]: I0707 00:02:23.171068 2528 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-cilium-run\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171329 kubelet[2528]: I0707 00:02:23.171079 2528 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/327c3bdd-1104-4eb4-86e5-8e4700bdd490-hubble-tls\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.171329 kubelet[2528]: I0707 00:02:23.171090 2528 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/327c3bdd-1104-4eb4-86e5-8e4700bdd490-xtables-lock\") on node \"ci-4081-3-4-6-7e2061accb\" DevicePath \"\""
Jul 7 00:02:23.179724 kubelet[2528]: I0707 00:02:23.158467 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5"} err="failed to get container status \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1123389833f2fe2d477b06e22655ddb0d1810a27d38da2f861185da4a819ce5\": not found"
Jul 7 00:02:23.179724 kubelet[2528]: I0707 00:02:23.179537 2528 scope.go:117] "RemoveContainer" containerID="96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882"
Jul 7 00:02:23.181322 containerd[1489]: time="2025-07-07T00:02:23.181286839Z" level=info msg="RemoveContainer for \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\""
Jul 7 00:02:23.185241 containerd[1489]: time="2025-07-07T00:02:23.185206149Z" level=info msg="RemoveContainer for \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\" returns successfully"
Jul 7 00:02:23.186093 kubelet[2528]: I0707 00:02:23.185472 2528 scope.go:117] "RemoveContainer" containerID="12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa"
Jul 7 00:02:23.186905 containerd[1489]: time="2025-07-07T00:02:23.186862320Z" level=info msg="RemoveContainer for \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\""
Jul 7 00:02:23.191599 containerd[1489]: time="2025-07-07T00:02:23.191573136Z" level=info msg="RemoveContainer for \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\" returns successfully"
Jul 7 00:02:23.191873 kubelet[2528]: I0707 00:02:23.191856 2528 scope.go:117] "RemoveContainer" containerID="30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe"
Jul 7 00:02:23.193167 containerd[1489]: time="2025-07-07T00:02:23.193132995Z" level=info msg="RemoveContainer for \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\""
Jul 7 00:02:23.196699 containerd[1489]: time="2025-07-07T00:02:23.196582175Z" level=info msg="RemoveContainer for \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\" returns successfully"
Jul 7 00:02:23.196820 kubelet[2528]: I0707 00:02:23.196802 2528 scope.go:117] "RemoveContainer" containerID="9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd"
Jul 7 00:02:23.198004 containerd[1489]: time="2025-07-07T00:02:23.197941145Z" level=info msg="RemoveContainer for \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\""
Jul 7 00:02:23.201245 containerd[1489]: time="2025-07-07T00:02:23.201205446Z" level=info msg="RemoveContainer for \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\" returns successfully"
Jul 7 00:02:23.201831 kubelet[2528]: I0707 00:02:23.201468 2528 scope.go:117] "RemoveContainer" containerID="3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6"
Jul 7 00:02:23.202789 containerd[1489]: time="2025-07-07T00:02:23.202763722Z" level=info msg="RemoveContainer for \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\""
Jul 7 00:02:23.206484 containerd[1489]: time="2025-07-07T00:02:23.206402140Z" level=info msg="RemoveContainer for \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\" returns successfully"
Jul 7 00:02:23.206907 kubelet[2528]: I0707 00:02:23.206657 2528 scope.go:117] "RemoveContainer" containerID="96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882"
Jul 7 00:02:23.206971 containerd[1489]: time="2025-07-07T00:02:23.206842203Z" level=error msg="ContainerStatus for \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\": not found"
Jul 7 00:02:23.207168 kubelet[2528]: E0707 00:02:23.207149 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container
\"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\": not found" containerID="96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882" Jul 7 00:02:23.207377 kubelet[2528]: I0707 00:02:23.207283 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882"} err="failed to get container status \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\": rpc error: code = NotFound desc = an error occurred when try to find container \"96209ec3b5db3183bc8313c1d9aded79944730aae0b28431f43984c4c2459882\": not found" Jul 7 00:02:23.207377 kubelet[2528]: I0707 00:02:23.207312 2528 scope.go:117] "RemoveContainer" containerID="12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa" Jul 7 00:02:23.207718 containerd[1489]: time="2025-07-07T00:02:23.207509144Z" level=error msg="ContainerStatus for \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\": not found" Jul 7 00:02:23.207943 kubelet[2528]: E0707 00:02:23.207834 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\": not found" containerID="12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa" Jul 7 00:02:23.207943 kubelet[2528]: I0707 00:02:23.207871 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa"} err="failed to get container status \"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"12ae6ef10b2a58f6a6147148e4825c765c311a69429e0fb7ca485420b28c6daa\": not found" Jul 7 00:02:23.207943 kubelet[2528]: I0707 00:02:23.207889 2528 scope.go:117] "RemoveContainer" containerID="30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe" Jul 7 00:02:23.208323 containerd[1489]: time="2025-07-07T00:02:23.208180012Z" level=error msg="ContainerStatus for \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\": not found" Jul 7 00:02:23.208580 kubelet[2528]: E0707 00:02:23.208478 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\": not found" containerID="30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe" Jul 7 00:02:23.208580 kubelet[2528]: I0707 00:02:23.208526 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe"} err="failed to get container status \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"30ee2e8084cc36b8420e0afe7252f9a5fb37ca5bb0fcf240e7b6108bb77f74fe\": not found" Jul 7 00:02:23.208580 kubelet[2528]: I0707 00:02:23.208545 2528 scope.go:117] "RemoveContainer" containerID="9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd" Jul 7 00:02:23.209145 containerd[1489]: time="2025-07-07T00:02:23.208931573Z" level=error msg="ContainerStatus for \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\": not found" Jul 7 00:02:23.209211 kubelet[2528]: E0707 00:02:23.209035 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\": not found" containerID="9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd" Jul 7 00:02:23.209211 kubelet[2528]: I0707 00:02:23.209058 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd"} err="failed to get container status \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\": rpc error: code = NotFound desc = an error occurred when try to find container \"9aa21bc0992ce9f1afa7fb72cd5e2ee51a94664a43d66cf9e92c5a23e8a23edd\": not found" Jul 7 00:02:23.209211 kubelet[2528]: I0707 00:02:23.209074 2528 scope.go:117] "RemoveContainer" containerID="3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6" Jul 7 00:02:23.209696 containerd[1489]: time="2025-07-07T00:02:23.209476664Z" level=error msg="ContainerStatus for \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\": not found" Jul 7 00:02:23.209820 kubelet[2528]: E0707 00:02:23.209777 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\": not found" containerID="3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6" Jul 7 00:02:23.209820 kubelet[2528]: I0707 00:02:23.209802 2528 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6"} err="failed to get container status \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3622aa595506c93fe5ec566a6943a5b61f91c384dba317c7f8db2cdaf12939d6\": not found" Jul 7 00:02:23.377765 systemd[1]: Removed slice kubepods-burstable-pod327c3bdd_1104_4eb4_86e5_8e4700bdd490.slice - libcontainer container kubepods-burstable-pod327c3bdd_1104_4eb4_86e5_8e4700bdd490.slice. Jul 7 00:02:23.378242 systemd[1]: kubepods-burstable-pod327c3bdd_1104_4eb4_86e5_8e4700bdd490.slice: Consumed 8.368s CPU time. Jul 7 00:02:23.708310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b-rootfs.mount: Deactivated successfully. Jul 7 00:02:23.708481 systemd[1]: var-lib-kubelet-pods-fcf2c35e\x2d7dca\x2d4015\x2d90a2\x2d421d983bac78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhst4k.mount: Deactivated successfully. Jul 7 00:02:23.708595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73-rootfs.mount: Deactivated successfully. Jul 7 00:02:23.708759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73-shm.mount: Deactivated successfully. Jul 7 00:02:23.708879 systemd[1]: var-lib-kubelet-pods-327c3bdd\x2d1104\x2d4eb4\x2d86e5\x2d8e4700bdd490-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzwvws.mount: Deactivated successfully. Jul 7 00:02:23.708981 systemd[1]: var-lib-kubelet-pods-327c3bdd\x2d1104\x2d4eb4\x2d86e5\x2d8e4700bdd490-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 7 00:02:23.709080 systemd[1]: var-lib-kubelet-pods-327c3bdd\x2d1104\x2d4eb4\x2d86e5\x2d8e4700bdd490-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:02:24.630350 sshd[4130]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:24.637161 systemd[1]: sshd@20-95.217.0.60:22-147.75.109.163:52262.service: Deactivated successfully. Jul 7 00:02:24.640389 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:02:24.641276 systemd[1]: session-20.scope: Consumed 1.005s CPU time. Jul 7 00:02:24.642944 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:02:24.644792 systemd-logind[1467]: Removed session 20. Jul 7 00:02:24.812222 systemd[1]: Started sshd@21-95.217.0.60:22-147.75.109.163:52270.service - OpenSSH per-connection server daemon (147.75.109.163:52270). Jul 7 00:02:25.035876 kubelet[2528]: I0707 00:02:25.035795 2528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="327c3bdd-1104-4eb4-86e5-8e4700bdd490" path="/var/lib/kubelet/pods/327c3bdd-1104-4eb4-86e5-8e4700bdd490/volumes" Jul 7 00:02:25.037143 kubelet[2528]: I0707 00:02:25.037079 2528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcf2c35e-7dca-4015-90a2-421d983bac78" path="/var/lib/kubelet/pods/fcf2c35e-7dca-4015-90a2-421d983bac78/volumes" Jul 7 00:02:25.218337 kubelet[2528]: E0707 00:02:25.204188 2528 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:02:25.841049 sshd[4296]: Accepted publickey for core from 147.75.109.163 port 52270 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:02:25.843469 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:25.851974 systemd-logind[1467]: New session 21 of user core. 
Jul 7 00:02:25.857910 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:02:27.345346 kubelet[2528]: E0707 00:02:27.345303 2528 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="327c3bdd-1104-4eb4-86e5-8e4700bdd490" containerName="apply-sysctl-overwrites" Jul 7 00:02:27.346500 kubelet[2528]: E0707 00:02:27.345861 2528 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="327c3bdd-1104-4eb4-86e5-8e4700bdd490" containerName="clean-cilium-state" Jul 7 00:02:27.346500 kubelet[2528]: E0707 00:02:27.345879 2528 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="327c3bdd-1104-4eb4-86e5-8e4700bdd490" containerName="mount-bpf-fs" Jul 7 00:02:27.346500 kubelet[2528]: E0707 00:02:27.345887 2528 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fcf2c35e-7dca-4015-90a2-421d983bac78" containerName="cilium-operator" Jul 7 00:02:27.346500 kubelet[2528]: E0707 00:02:27.345894 2528 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="327c3bdd-1104-4eb4-86e5-8e4700bdd490" containerName="cilium-agent" Jul 7 00:02:27.346500 kubelet[2528]: E0707 00:02:27.345902 2528 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="327c3bdd-1104-4eb4-86e5-8e4700bdd490" containerName="mount-cgroup" Jul 7 00:02:27.346500 kubelet[2528]: I0707 00:02:27.345954 2528 memory_manager.go:354] "RemoveStaleState removing state" podUID="327c3bdd-1104-4eb4-86e5-8e4700bdd490" containerName="cilium-agent" Jul 7 00:02:27.346500 kubelet[2528]: I0707 00:02:27.345962 2528 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcf2c35e-7dca-4015-90a2-421d983bac78" containerName="cilium-operator" Jul 7 00:02:27.367955 systemd[1]: Created slice kubepods-burstable-podc046f2db_83ea_410c_b954_a2510040bc1b.slice - libcontainer container kubepods-burstable-podc046f2db_83ea_410c_b954_a2510040bc1b.slice. 
Jul 7 00:02:27.406800 kubelet[2528]: I0707 00:02:27.406725 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-cni-path\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.406800 kubelet[2528]: I0707 00:02:27.406781 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c046f2db-83ea-410c-b954-a2510040bc1b-hubble-tls\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.406800 kubelet[2528]: I0707 00:02:27.406812 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-hostproc\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407036 kubelet[2528]: I0707 00:02:27.406831 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-bpf-maps\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407036 kubelet[2528]: I0707 00:02:27.406852 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-host-proc-sys-kernel\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407036 kubelet[2528]: I0707 00:02:27.406871 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-xtables-lock\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407036 kubelet[2528]: I0707 00:02:27.406889 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c046f2db-83ea-410c-b954-a2510040bc1b-clustermesh-secrets\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407036 kubelet[2528]: I0707 00:02:27.406908 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c046f2db-83ea-410c-b954-a2510040bc1b-cilium-config-path\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407036 kubelet[2528]: I0707 00:02:27.406927 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-cilium-run\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407273 kubelet[2528]: I0707 00:02:27.406944 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c046f2db-83ea-410c-b954-a2510040bc1b-cilium-ipsec-secrets\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407273 kubelet[2528]: I0707 00:02:27.406962 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpwt\" (UniqueName: 
\"kubernetes.io/projected/c046f2db-83ea-410c-b954-a2510040bc1b-kube-api-access-wkpwt\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407273 kubelet[2528]: I0707 00:02:27.406982 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-lib-modules\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407273 kubelet[2528]: I0707 00:02:27.407002 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-cilium-cgroup\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407273 kubelet[2528]: I0707 00:02:27.407018 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-etc-cni-netd\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.407501 kubelet[2528]: I0707 00:02:27.407037 2528 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c046f2db-83ea-410c-b954-a2510040bc1b-host-proc-sys-net\") pod \"cilium-rf9m8\" (UID: \"c046f2db-83ea-410c-b954-a2510040bc1b\") " pod="kube-system/cilium-rf9m8" Jul 7 00:02:27.561928 sshd[4296]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:27.573868 systemd[1]: sshd@21-95.217.0.60:22-147.75.109.163:52270.service: Deactivated successfully. Jul 7 00:02:27.576200 systemd[1]: session-21.scope: Deactivated successfully. 
Jul 7 00:02:27.577414 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:02:27.579087 systemd-logind[1467]: Removed session 21. Jul 7 00:02:27.684745 containerd[1489]: time="2025-07-07T00:02:27.684341738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rf9m8,Uid:c046f2db-83ea-410c-b954-a2510040bc1b,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:27.738168 systemd[1]: Started sshd@22-95.217.0.60:22-147.75.109.163:42356.service - OpenSSH per-connection server daemon (147.75.109.163:42356). Jul 7 00:02:27.741554 containerd[1489]: time="2025-07-07T00:02:27.739901001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:27.741554 containerd[1489]: time="2025-07-07T00:02:27.740829186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:27.741554 containerd[1489]: time="2025-07-07T00:02:27.740859403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:27.741554 containerd[1489]: time="2025-07-07T00:02:27.740989339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:27.775816 systemd[1]: Started cri-containerd-0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d.scope - libcontainer container 0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d. 
Jul 7 00:02:27.807152 containerd[1489]: time="2025-07-07T00:02:27.807065570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rf9m8,Uid:c046f2db-83ea-410c-b954-a2510040bc1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\"" Jul 7 00:02:27.821185 containerd[1489]: time="2025-07-07T00:02:27.821127892Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:02:27.835558 containerd[1489]: time="2025-07-07T00:02:27.835464392Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030\"" Jul 7 00:02:27.836425 containerd[1489]: time="2025-07-07T00:02:27.836380965Z" level=info msg="StartContainer for \"51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030\"" Jul 7 00:02:27.868817 systemd[1]: Started cri-containerd-51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030.scope - libcontainer container 51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030. Jul 7 00:02:27.900681 containerd[1489]: time="2025-07-07T00:02:27.900475970Z" level=info msg="StartContainer for \"51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030\" returns successfully" Jul 7 00:02:27.917344 systemd[1]: cri-containerd-51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030.scope: Deactivated successfully. 
Jul 7 00:02:27.961512 containerd[1489]: time="2025-07-07T00:02:27.961410758Z" level=info msg="shim disconnected" id=51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030 namespace=k8s.io Jul 7 00:02:27.961512 containerd[1489]: time="2025-07-07T00:02:27.961497673Z" level=warning msg="cleaning up after shim disconnected" id=51a4173cf58fe5b26803472160a20f09ee217f87ea4d7814583768d095203030 namespace=k8s.io Jul 7 00:02:27.961512 containerd[1489]: time="2025-07-07T00:02:27.961509845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:02:28.098454 containerd[1489]: time="2025-07-07T00:02:28.098169752Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:02:28.123367 containerd[1489]: time="2025-07-07T00:02:28.123185349Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e\"" Jul 7 00:02:28.124307 containerd[1489]: time="2025-07-07T00:02:28.124232981Z" level=info msg="StartContainer for \"03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e\"" Jul 7 00:02:28.172021 systemd[1]: Started cri-containerd-03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e.scope - libcontainer container 03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e. Jul 7 00:02:28.209487 containerd[1489]: time="2025-07-07T00:02:28.209434466Z" level=info msg="StartContainer for \"03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e\" returns successfully" Jul 7 00:02:28.222381 systemd[1]: cri-containerd-03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e.scope: Deactivated successfully. 
Jul 7 00:02:28.252550 containerd[1489]: time="2025-07-07T00:02:28.252431139Z" level=info msg="shim disconnected" id=03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e namespace=k8s.io Jul 7 00:02:28.252550 containerd[1489]: time="2025-07-07T00:02:28.252494268Z" level=warning msg="cleaning up after shim disconnected" id=03d4dc81871e9ec5659feaf4e83e34a60ceb38f714270f1f1dae41f4c0b96e7e namespace=k8s.io Jul 7 00:02:28.252550 containerd[1489]: time="2025-07-07T00:02:28.252504928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:02:28.772151 sshd[4326]: Accepted publickey for core from 147.75.109.163 port 42356 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:02:28.774330 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:28.782707 systemd-logind[1467]: New session 22 of user core. Jul 7 00:02:28.790865 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:02:29.101161 containerd[1489]: time="2025-07-07T00:02:29.100855332Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:02:29.128205 containerd[1489]: time="2025-07-07T00:02:29.128033303Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e\"" Jul 7 00:02:29.137678 containerd[1489]: time="2025-07-07T00:02:29.135573312Z" level=info msg="StartContainer for \"4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e\"" Jul 7 00:02:29.138020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312245372.mount: Deactivated successfully. 
Jul 7 00:02:29.182813 systemd[1]: Started cri-containerd-4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e.scope - libcontainer container 4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e. Jul 7 00:02:29.222077 containerd[1489]: time="2025-07-07T00:02:29.222005566Z" level=info msg="StartContainer for \"4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e\" returns successfully" Jul 7 00:02:29.229449 systemd[1]: cri-containerd-4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e.scope: Deactivated successfully. Jul 7 00:02:29.260561 containerd[1489]: time="2025-07-07T00:02:29.260492013Z" level=info msg="shim disconnected" id=4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e namespace=k8s.io Jul 7 00:02:29.260907 containerd[1489]: time="2025-07-07T00:02:29.260870589Z" level=warning msg="cleaning up after shim disconnected" id=4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e namespace=k8s.io Jul 7 00:02:29.260907 containerd[1489]: time="2025-07-07T00:02:29.260892239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:02:29.467007 sshd[4326]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:29.471654 systemd[1]: sshd@22-95.217.0.60:22-147.75.109.163:42356.service: Deactivated successfully. Jul 7 00:02:29.474570 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:02:29.477966 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:02:29.480293 systemd-logind[1467]: Removed session 22. Jul 7 00:02:29.517201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ef9cad8ef1ebd22fa63f79484fbfc5fd231150b9f8dd3bc548afe3cf7854d5e-rootfs.mount: Deactivated successfully. Jul 7 00:02:29.655788 systemd[1]: Started sshd@23-95.217.0.60:22-147.75.109.163:42372.service - OpenSSH per-connection server daemon (147.75.109.163:42372). 
Jul 7 00:02:30.109981 containerd[1489]: time="2025-07-07T00:02:30.109733117Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:02:30.148261 containerd[1489]: time="2025-07-07T00:02:30.147586881Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf\"" Jul 7 00:02:30.151471 containerd[1489]: time="2025-07-07T00:02:30.151420211Z" level=info msg="StartContainer for \"d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf\"" Jul 7 00:02:30.195928 systemd[1]: Started cri-containerd-d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf.scope - libcontainer container d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf. Jul 7 00:02:30.220532 kubelet[2528]: E0707 00:02:30.220469 2528 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:02:30.227276 update_engine[1472]: I20250707 00:02:30.225126 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:02:30.227276 update_engine[1472]: I20250707 00:02:30.225404 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:02:30.227276 update_engine[1472]: I20250707 00:02:30.225832 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 00:02:30.228341 update_engine[1472]: E20250707 00:02:30.228077 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:02:30.228341 update_engine[1472]: I20250707 00:02:30.228166 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 7 00:02:30.228274 systemd[1]: cri-containerd-d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf.scope: Deactivated successfully.
Jul 7 00:02:30.230429 containerd[1489]: time="2025-07-07T00:02:30.230353210Z" level=info msg="StartContainer for \"d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf\" returns successfully"
Jul 7 00:02:30.259189 containerd[1489]: time="2025-07-07T00:02:30.259109258Z" level=info msg="shim disconnected" id=d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf namespace=k8s.io
Jul 7 00:02:30.259189 containerd[1489]: time="2025-07-07T00:02:30.259175773Z" level=warning msg="cleaning up after shim disconnected" id=d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf namespace=k8s.io
Jul 7 00:02:30.259189 containerd[1489]: time="2025-07-07T00:02:30.259185892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:02:30.516908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d95435309b86ab603a09393461c686963164e0f98c036927e1a7cb3f36284cdf-rootfs.mount: Deactivated successfully.
Jul 7 00:02:30.695918 sshd[4540]: Accepted publickey for core from 147.75.109.163 port 42372 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:02:30.698336 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:02:30.707787 systemd-logind[1467]: New session 23 of user core.
Jul 7 00:02:30.712954 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 00:02:31.115325 containerd[1489]: time="2025-07-07T00:02:31.115118494Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 00:02:31.156965 kubelet[2528]: I0707 00:02:31.155136 2528 setters.go:600] "Node became not ready" node="ci-4081-3-4-6-7e2061accb" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:02:31Z","lastTransitionTime":"2025-07-07T00:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 00:02:31.157174 containerd[1489]: time="2025-07-07T00:02:31.156793370Z" level=info msg="CreateContainer within sandbox \"0364cff49519ca601627761d2cbfbbb0646418069e7856bb8c17a9b151d3c45d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6668a36d9f3cf10773c1660299251cc14b6bc308972734dcf2c83af2a46ddebc\""
Jul 7 00:02:31.158312 containerd[1489]: time="2025-07-07T00:02:31.158279331Z" level=info msg="StartContainer for \"6668a36d9f3cf10773c1660299251cc14b6bc308972734dcf2c83af2a46ddebc\""
Jul 7 00:02:31.207902 systemd[1]: Started cri-containerd-6668a36d9f3cf10773c1660299251cc14b6bc308972734dcf2c83af2a46ddebc.scope - libcontainer container 6668a36d9f3cf10773c1660299251cc14b6bc308972734dcf2c83af2a46ddebc.
Jul 7 00:02:31.240539 containerd[1489]: time="2025-07-07T00:02:31.240418633Z" level=info msg="StartContainer for \"6668a36d9f3cf10773c1660299251cc14b6bc308972734dcf2c83af2a46ddebc\" returns successfully"
Jul 7 00:02:31.834685 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 7 00:02:32.144100 kubelet[2528]: I0707 00:02:32.143825 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rf9m8" podStartSLOduration=5.143797941 podStartE2EDuration="5.143797941s" podCreationTimestamp="2025-07-07 00:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:32.141313612 +0000 UTC m=+347.264237002" watchObservedRunningTime="2025-07-07 00:02:32.143797941 +0000 UTC m=+347.266721310"
Jul 7 00:02:34.866360 systemd-networkd[1391]: lxc_health: Link UP
Jul 7 00:02:34.873003 systemd-networkd[1391]: lxc_health: Gained carrier
Jul 7 00:02:36.887831 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jul 7 00:02:38.114528 kubelet[2528]: E0707 00:02:38.114481 2528 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60722->127.0.0.1:44799: write tcp 127.0.0.1:60722->127.0.0.1:44799: write: broken pipe
Jul 7 00:02:40.233759 update_engine[1472]: I20250707 00:02:40.233685 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:02:40.236010 update_engine[1472]: I20250707 00:02:40.234025 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:02:40.236010 update_engine[1472]: I20250707 00:02:40.234318 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:02:40.236010 update_engine[1472]: E20250707 00:02:40.235721 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:02:40.236010 update_engine[1472]: I20250707 00:02:40.235798 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 00:02:40.450105 sshd[4540]: pam_unix(sshd:session): session closed for user core
Jul 7 00:02:40.455392 systemd[1]: sshd@23-95.217.0.60:22-147.75.109.163:42372.service: Deactivated successfully.
Jul 7 00:02:40.459514 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 00:02:40.461505 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Jul 7 00:02:40.464015 systemd-logind[1467]: Removed session 23.
Jul 7 00:02:45.053242 containerd[1489]: time="2025-07-07T00:02:45.053133226Z" level=info msg="StopPodSandbox for \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\""
Jul 7 00:02:45.053800 containerd[1489]: time="2025-07-07T00:02:45.053270827Z" level=info msg="TearDown network for sandbox \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" successfully"
Jul 7 00:02:45.053800 containerd[1489]: time="2025-07-07T00:02:45.053290715Z" level=info msg="StopPodSandbox for \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" returns successfully"
Jul 7 00:02:45.066303 containerd[1489]: time="2025-07-07T00:02:45.066207183Z" level=info msg="RemovePodSandbox for \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\""
Jul 7 00:02:45.070864 containerd[1489]: time="2025-07-07T00:02:45.070779557Z" level=info msg="Forcibly stopping sandbox \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\""
Jul 7 00:02:45.071103 containerd[1489]: time="2025-07-07T00:02:45.070900115Z" level=info msg="TearDown network for sandbox \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" successfully"
Jul 7 00:02:45.078085 containerd[1489]: time="2025-07-07T00:02:45.077993893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 00:02:45.095663 containerd[1489]: time="2025-07-07T00:02:45.095544552Z" level=info msg="RemovePodSandbox \"97276da28bac7cdab317a89b816bc96429ccc6e6b392984a2303b2864b333f0b\" returns successfully"
Jul 7 00:02:45.096400 containerd[1489]: time="2025-07-07T00:02:45.096348393Z" level=info msg="StopPodSandbox for \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\""
Jul 7 00:02:45.096504 containerd[1489]: time="2025-07-07T00:02:45.096476165Z" level=info msg="TearDown network for sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" successfully"
Jul 7 00:02:45.096552 containerd[1489]: time="2025-07-07T00:02:45.096497616Z" level=info msg="StopPodSandbox for \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" returns successfully"
Jul 7 00:02:45.097143 containerd[1489]: time="2025-07-07T00:02:45.096970060Z" level=info msg="RemovePodSandbox for \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\""
Jul 7 00:02:45.097143 containerd[1489]: time="2025-07-07T00:02:45.097009754Z" level=info msg="Forcibly stopping sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\""
Jul 7 00:02:45.097143 containerd[1489]: time="2025-07-07T00:02:45.097116407Z" level=info msg="TearDown network for sandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" successfully"
Jul 7 00:02:45.102383 containerd[1489]: time="2025-07-07T00:02:45.102309657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 00:02:45.102383 containerd[1489]: time="2025-07-07T00:02:45.102377615Z" level=info msg="RemovePodSandbox \"1f2dfe7339217a3050d4fca2cfb3a8c78735465ad67867adab0c4f6f57ec8b73\" returns successfully"
Jul 7 00:02:50.231944 update_engine[1472]: I20250707 00:02:50.231844 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:02:50.232478 update_engine[1472]: I20250707 00:02:50.232199 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:02:50.232778 update_engine[1472]: I20250707 00:02:50.232537 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:02:50.233548 update_engine[1472]: E20250707 00:02:50.233489 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:02:50.233698 update_engine[1472]: I20250707 00:02:50.233567 1472 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:02:50.233698 update_engine[1472]: I20250707 00:02:50.233579 1472 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:02:50.234226 update_engine[1472]: E20250707 00:02:50.233702 1472 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 00:02:50.234226 update_engine[1472]: I20250707 00:02:50.233732 1472 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 00:02:50.234226 update_engine[1472]: I20250707 00:02:50.233741 1472 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:02:50.234226 update_engine[1472]: I20250707 00:02:50.233750 1472 update_attempter.cc:306] Processing Done.
Jul 7 00:02:50.234226 update_engine[1472]: E20250707 00:02:50.233770 1472 update_attempter.cc:619] Update failed.
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.236881 1472 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.236917 1472 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.236948 1472 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.237057 1472 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.237092 1472 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.237102 1472 omaha_request_action.cc:272] Request:
Jul 7 00:02:50.237688 update_engine[1472]:
Jul 7 00:02:50.237688 update_engine[1472]:
Jul 7 00:02:50.237688 update_engine[1472]:
Jul 7 00:02:50.237688 update_engine[1472]:
Jul 7 00:02:50.237688 update_engine[1472]:
Jul 7 00:02:50.237688 update_engine[1472]:
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.237126 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.237340 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:02:50.237688 update_engine[1472]: I20250707 00:02:50.237568 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:02:50.239996 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 00:02:50.241442 update_engine[1472]: E20250707 00:02:50.240763 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:02:50.241442 update_engine[1472]: I20250707 00:02:50.240926 1472 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:02:50.241442 update_engine[1472]: I20250707 00:02:50.240942 1472 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:02:50.241442 update_engine[1472]: I20250707 00:02:50.240953 1472 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:02:50.241442 update_engine[1472]: I20250707 00:02:50.240962 1472 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:02:50.241442 update_engine[1472]: I20250707 00:02:50.240971 1472 update_attempter.cc:306] Processing Done.
Jul 7 00:02:50.241442 update_engine[1472]: I20250707 00:02:50.241029 1472 update_attempter.cc:310] Error event sent.
Jul 7 00:02:50.241442 update_engine[1472]: I20250707 00:02:50.241045 1472 update_check_scheduler.cc:74] Next update check in 48m42s
Jul 7 00:02:50.242574 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 00:03:11.541069 kubelet[2528]: E0707 00:03:11.540950 2528 controller.go:195] "Failed to update lease" err="Put \"https://95.217.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-6-7e2061accb?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 7 00:03:11.772826 kubelet[2528]: E0707 00:03:11.772517 2528 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38848->10.0.0.2:2379: read: connection timed out"
Jul 7 00:03:11.781247 systemd[1]: cri-containerd-34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3.scope: Deactivated successfully.
Jul 7 00:03:11.781777 systemd[1]: cri-containerd-34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3.scope: Consumed 2.150s CPU time, 24.3M memory peak, 0B memory swap peak.
Jul 7 00:03:11.820748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3-rootfs.mount: Deactivated successfully.
Jul 7 00:03:11.832247 containerd[1489]: time="2025-07-07T00:03:11.832153858Z" level=info msg="shim disconnected" id=34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3 namespace=k8s.io
Jul 7 00:03:11.832247 containerd[1489]: time="2025-07-07T00:03:11.832240362Z" level=warning msg="cleaning up after shim disconnected" id=34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3 namespace=k8s.io
Jul 7 00:03:11.832247 containerd[1489]: time="2025-07-07T00:03:11.832255811Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:03:12.218663 kubelet[2528]: I0707 00:03:12.218383 2528 scope.go:117] "RemoveContainer" containerID="34a57c5f0303e495071d4c9fa7353cc508e1348e8f2e3ad5a8d20f904332f4b3"
Jul 7 00:03:12.224138 containerd[1489]: time="2025-07-07T00:03:12.224051772Z" level=info msg="CreateContainer within sandbox \"867d863b88feedbcf931a3057f17a60cd51a013efd9c97fc0465d10a86470948\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:03:12.247031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369797644.mount: Deactivated successfully.
Jul 7 00:03:12.247261 containerd[1489]: time="2025-07-07T00:03:12.247176004Z" level=info msg="CreateContainer within sandbox \"867d863b88feedbcf931a3057f17a60cd51a013efd9c97fc0465d10a86470948\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"15ee663692a23f801074bd22d3a556ad879592f370ddb0cfca60615425806fd4\""
Jul 7 00:03:12.250485 containerd[1489]: time="2025-07-07T00:03:12.249118534Z" level=info msg="StartContainer for \"15ee663692a23f801074bd22d3a556ad879592f370ddb0cfca60615425806fd4\""
Jul 7 00:03:12.291887 systemd[1]: Started cri-containerd-15ee663692a23f801074bd22d3a556ad879592f370ddb0cfca60615425806fd4.scope - libcontainer container 15ee663692a23f801074bd22d3a556ad879592f370ddb0cfca60615425806fd4.
Jul 7 00:03:12.354685 containerd[1489]: time="2025-07-07T00:03:12.354615172Z" level=info msg="StartContainer for \"15ee663692a23f801074bd22d3a556ad879592f370ddb0cfca60615425806fd4\" returns successfully"
Jul 7 00:03:12.370383 systemd[1]: cri-containerd-d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1.scope: Deactivated successfully.
Jul 7 00:03:12.371196 systemd[1]: cri-containerd-d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1.scope: Consumed 8.067s CPU time, 23.7M memory peak, 0B memory swap peak.
Jul 7 00:03:12.407131 containerd[1489]: time="2025-07-07T00:03:12.407002257Z" level=info msg="shim disconnected" id=d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1 namespace=k8s.io
Jul 7 00:03:12.407692 containerd[1489]: time="2025-07-07T00:03:12.407430076Z" level=warning msg="cleaning up after shim disconnected" id=d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1 namespace=k8s.io
Jul 7 00:03:12.407692 containerd[1489]: time="2025-07-07T00:03:12.407451818Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:03:12.420241 containerd[1489]: time="2025-07-07T00:03:12.420151129Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:03:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 00:03:12.819109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1-rootfs.mount: Deactivated successfully.
Jul 7 00:03:13.222967 kubelet[2528]: I0707 00:03:13.222587 2528 scope.go:117] "RemoveContainer" containerID="d95f363eccedf9841e97dfdec0018a839757e30325f6ea8de700ba022adad5a1"
Jul 7 00:03:13.227228 containerd[1489]: time="2025-07-07T00:03:13.226463148Z" level=info msg="CreateContainer within sandbox \"9b62e3a295b421030de9e4510b3647d4047c8b0cd6906b378ae52a839fdaa250\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 7 00:03:13.252909 containerd[1489]: time="2025-07-07T00:03:13.252835475Z" level=info msg="CreateContainer within sandbox \"9b62e3a295b421030de9e4510b3647d4047c8b0cd6906b378ae52a839fdaa250\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4665b348dd74c73c2868666f9d2a3e297a4bec515d2e82619cafa4860c8deba2\""
Jul 7 00:03:13.253840 containerd[1489]: time="2025-07-07T00:03:13.253806556Z" level=info msg="StartContainer for \"4665b348dd74c73c2868666f9d2a3e297a4bec515d2e82619cafa4860c8deba2\""
Jul 7 00:03:13.254541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047874083.mount: Deactivated successfully.
Jul 7 00:03:13.302873 systemd[1]: Started cri-containerd-4665b348dd74c73c2868666f9d2a3e297a4bec515d2e82619cafa4860c8deba2.scope - libcontainer container 4665b348dd74c73c2868666f9d2a3e297a4bec515d2e82619cafa4860c8deba2.
Jul 7 00:03:13.361024 containerd[1489]: time="2025-07-07T00:03:13.360932236Z" level=info msg="StartContainer for \"4665b348dd74c73c2868666f9d2a3e297a4bec515d2e82619cafa4860c8deba2\" returns successfully"
Jul 7 00:03:15.748739 kubelet[2528]: E0707 00:03:15.742551 2528 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38688->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-6-7e2061accb.184fcf300996e396 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-6-7e2061accb,UID:a8ba97ea8a79604b926b2c1da1ab06f5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-6-7e2061accb,},FirstTimestamp:2025-07-07 00:03:05.296675734 +0000 UTC m=+380.419599103,LastTimestamp:2025-07-07 00:03:05.296675734 +0000 UTC m=+380.419599103,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-6-7e2061accb,}"