Nov 8 00:23:40.970950 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:23:40.970979 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:40.970991 kernel: BIOS-provided physical RAM map: Nov 8 00:23:40.970999 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 8 00:23:40.971006 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 8 00:23:40.971014 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 8 00:23:40.971023 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Nov 8 00:23:40.971031 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Nov 8 00:23:40.971041 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 8 00:23:40.971050 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 8 00:23:40.971058 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 8 00:23:40.971066 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 8 00:23:40.971073 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 8 00:23:40.971081 kernel: NX (Execute Disable) protection: active Nov 8 00:23:40.971092 kernel: APIC: Static calls initialized Nov 8 00:23:40.971102 kernel: SMBIOS 3.0.0 present. 
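Annotation: the BIOS-e820 entries above are the physical RAM ranges the firmware hands to the kernel. A quick sanity check is to sum the two ranges marked "usable"; the sketch below (an illustration, not part of the log, with the ranges copied verbatim from the lines above) shows the guest has roughly 2 GiB of RAM, in line with the 2047464K total the kernel reports further down.

```python
import re

# e820 "usable" ranges copied from the log lines above.
E820_USABLE = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable",
]

total = 0
for line in E820_USABLE:
    m = re.search(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", line)
    start, end = (int(x, 16) for x in m.groups())
    total += end - start + 1          # e820 ranges are inclusive

print(f"usable RAM: {total} bytes (~{total / 2**30:.2f} GiB)")
# -> usable RAM: 2096610304 bytes (~1.95 GiB)
```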
Nov 8 00:23:40.971110 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Nov 8 00:23:40.971117 kernel: Hypervisor detected: KVM Nov 8 00:23:40.971126 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 8 00:23:40.971134 kernel: kvm-clock: using sched offset of 3602028636 cycles Nov 8 00:23:40.971142 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 8 00:23:40.971151 kernel: tsc: Detected 2445.404 MHz processor Nov 8 00:23:40.971159 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:23:40.971170 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:23:40.971178 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Nov 8 00:23:40.971187 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 8 00:23:40.971195 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:23:40.971203 kernel: Using GB pages for direct mapping Nov 8 00:23:40.971211 kernel: ACPI: Early table checksum verification disabled Nov 8 00:23:40.971220 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Nov 8 00:23:40.971229 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:40.971237 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:40.971247 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:40.971256 kernel: ACPI: FACS 0x000000007CFE0000 000040 Nov 8 00:23:40.971265 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:40.971273 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:40.971282 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:40.971306 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:40.971315 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Nov 8 00:23:40.971324 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Nov 8 00:23:40.971339 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Nov 8 00:23:40.971364 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Nov 8 00:23:40.971373 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Nov 8 00:23:40.971382 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Nov 8 00:23:40.971391 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Nov 8 00:23:40.971401 kernel: No NUMA configuration found Nov 8 00:23:40.971412 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Nov 8 00:23:40.971421 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Nov 8 00:23:40.971430 kernel: Zone ranges: Nov 8 00:23:40.971440 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:23:40.971449 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Nov 8 00:23:40.971458 kernel: Normal empty Nov 8 00:23:40.971467 kernel: Movable zone start for each node Nov 8 00:23:40.971475 kernel: Early memory node ranges Nov 8 00:23:40.971484 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 8 00:23:40.971492 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Nov 8 00:23:40.971503 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Nov 8 00:23:40.971513 kernel: On node 
0, zone DMA: 1 pages in unavailable ranges Nov 8 00:23:40.971521 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 8 00:23:40.971530 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 8 00:23:40.971538 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 8 00:23:40.971547 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:23:40.971556 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:23:40.971565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:23:40.971574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:23:40.971586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:23:40.971594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:23:40.971603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:23:40.971612 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:23:40.971622 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:23:40.971631 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:23:40.971639 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:23:40.971648 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 8 00:23:40.971657 kernel: Booting paravirtualized kernel on KVM Nov 8 00:23:40.971668 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:23:40.971677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:23:40.971686 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:23:40.971695 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:23:40.971703 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:23:40.971712 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 8 00:23:40.971722 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:40.971731 kernel: random: crng init done Nov 8 00:23:40.971742 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:23:40.971750 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:23:40.971759 kernel: Fallback order for Node 0: 0 Nov 8 00:23:40.971768 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Nov 8 00:23:40.971778 kernel: Policy zone: DMA32 Nov 8 00:23:40.971787 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:23:40.971796 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 125152K reserved, 0K cma-reserved) Nov 8 00:23:40.971805 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:23:40.971814 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:23:40.971824 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:23:40.971833 kernel: Dynamic Preempt: voluntary Nov 8 00:23:40.971841 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:23:40.971851 kernel: rcu: RCU event tracing is enabled. 
Nov 8 00:23:40.971860 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:23:40.971869 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:23:40.971878 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:23:40.971888 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:23:40.971897 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:23:40.971906 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:23:40.971916 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 8 00:23:40.971925 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:23:40.971934 kernel: Console: colour VGA+ 80x25 Nov 8 00:23:40.971943 kernel: printk: console [tty0] enabled Nov 8 00:23:40.971952 kernel: printk: console [ttyS0] enabled Nov 8 00:23:40.971962 kernel: ACPI: Core revision 20230628 Nov 8 00:23:40.971971 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 8 00:23:40.971980 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:23:40.971989 kernel: x2apic enabled Nov 8 00:23:40.971999 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:23:40.972008 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:23:40.972017 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 8 00:23:40.972025 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404) Nov 8 00:23:40.972034 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 8 00:23:40.972043 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 8 00:23:40.972051 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 8 00:23:40.972060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:23:40.972076 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:23:40.972087 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:23:40.972097 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 8 00:23:40.972108 kernel: active return thunk: retbleed_return_thunk Nov 8 00:23:40.972117 kernel: RETBleed: Mitigation: untrained return thunk Nov 8 00:23:40.972127 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:23:40.972135 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:23:40.972145 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:23:40.972156 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:23:40.972165 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:23:40.972174 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:23:40.972184 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:23:40.972194 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:23:40.972203 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:23:40.972212 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:23:40.972222 kernel: landlock: Up and running. Nov 8 00:23:40.972231 kernel: SELinux: Initializing. 
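Annotation: the "Enabled xstate features 0x7" value above is just the bitwise OR of the three XSAVE feature bits the kernel reported (0x001, 0x002, 0x004). A minimal check, purely illustrative:

```python
# The three XSAVE features reported in the x86/fpu lines above.
XSTATE_FEATURES = {
    0x001: "x87 floating point registers",
    0x002: "SSE registers",
    0x004: "AVX registers",
}

mask = 0
for bit in XSTATE_FEATURES:
    mask |= bit

print(hex(mask))   # -> 0x7, matching "Enabled xstate features 0x7"
```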
Nov 8 00:23:40.972244 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:23:40.972255 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:23:40.972266 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 8 00:23:40.972277 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:23:40.975330 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:23:40.975363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:23:40.975374 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 8 00:23:40.975384 kernel: ... version: 0 Nov 8 00:23:40.975393 kernel: ... bit width: 48 Nov 8 00:23:40.975406 kernel: ... generic registers: 6 Nov 8 00:23:40.975415 kernel: ... value mask: 0000ffffffffffff Nov 8 00:23:40.975424 kernel: ... max period: 00007fffffffffff Nov 8 00:23:40.975434 kernel: ... fixed-purpose events: 0 Nov 8 00:23:40.975444 kernel: ... event mask: 000000000000003f Nov 8 00:23:40.975454 kernel: signal: max sigframe size: 1776 Nov 8 00:23:40.975464 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:23:40.975477 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:23:40.975487 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:23:40.975501 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:23:40.975510 kernel: .... node #0, CPUs: #1 Nov 8 00:23:40.975517 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:23:40.975527 kernel: smpboot: Max logical packages: 1 Nov 8 00:23:40.975537 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Nov 8 00:23:40.975547 kernel: devtmpfs: initialized Nov 8 00:23:40.975556 kernel: x86/mm: Memory block size: 128MB Nov 8 00:23:40.975567 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:23:40.975578 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:23:40.975592 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:23:40.975603 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:23:40.975613 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:23:40.975624 kernel: audit: type=2000 audit(1762561420.059:1): state=initialized audit_enabled=0 res=1 Nov 8 00:23:40.975635 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:23:40.975646 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:23:40.975656 kernel: cpuidle: using governor menu Nov 8 00:23:40.975667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:23:40.975677 kernel: dca service started, version 1.12.1 Nov 8 00:23:40.975691 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 8 00:23:40.975701 kernel: PCI: Using configuration type 1 for base access Nov 8 00:23:40.975712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
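Annotation: the BogoMIPS figures above follow directly from the printed loops_per_jiffy value using the kernel's usual relation BogoMIPS = lpj * HZ / 500000. The sketch below reproduces them; it assumes CONFIG_HZ=1000 for this build, which the log does not state explicitly, and the last-digit differences come from the kernel truncating rather than rounding.

```python
# Hedged sketch: reproduce the BogoMIPS numbers from lpj=2445404.
lpj = 2445404          # "Calibrating delay loop (skipped) ... (lpj=2445404)"
hz = 1000              # assumed timer frequency (not printed in the log)
cpus = 2               # "smp: Brought up 1 node, 2 CPUs"

per_cpu = lpj * hz / 500_000
print(f"per-CPU BogoMIPS ~ {per_cpu:.2f}")        # -> 4890.81 (log: 4890.80)
print(f"total   BogoMIPS ~ {per_cpu * cpus:.2f}") # -> 9781.62 (log: 9781.61)
```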
Nov 8 00:23:40.975722 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:23:40.975733 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:23:40.975744 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:23:40.975754 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:23:40.975765 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:23:40.975775 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:23:40.975787 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:23:40.975798 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:23:40.975809 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:23:40.975819 kernel: ACPI: Interpreter enabled Nov 8 00:23:40.975830 kernel: ACPI: PM: (supports S0 S5) Nov 8 00:23:40.975840 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:23:40.975851 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:23:40.975861 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:23:40.975870 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 8 00:23:40.975881 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:23:40.976056 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:23:40.976175 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 8 00:23:40.976272 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 8 00:23:40.976301 kernel: PCI host bridge to bus 0000:00 Nov 8 00:23:40.976423 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:23:40.976510 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:23:40.976595 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:23:40.976990 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Nov 8 00:23:40.977092 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:23:40.977185 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 8 00:23:40.977279 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:23:40.977456 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 8 00:23:40.977584 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Nov 8 00:23:40.977696 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Nov 8 00:23:40.977802 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Nov 8 00:23:40.977908 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Nov 8 00:23:40.978016 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Nov 8 00:23:40.978126 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:23:40.978240 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.978403 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Nov 8 00:23:40.978523 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.978631 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Nov 8 00:23:40.978746 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.978853 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Nov 8 00:23:40.978969 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Nov 8 
00:23:40.979089 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Nov 8 00:23:40.979210 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.981477 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Nov 8 00:23:40.981604 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.981716 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Nov 8 00:23:40.981840 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.981954 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Nov 8 00:23:40.982063 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.982167 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Nov 8 00:23:40.982276 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Nov 8 00:23:40.982421 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Nov 8 00:23:40.982535 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 8 00:23:40.982646 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 8 00:23:40.982756 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 8 00:23:40.982861 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Nov 8 00:23:40.982967 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Nov 8 00:23:40.983082 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 8 00:23:40.983186 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 8 00:23:40.986917 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Nov 8 00:23:40.987072 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Nov 8 00:23:40.987199 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Nov 8 00:23:40.987368 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Nov 8 00:23:40.987478 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 8 00:23:40.987595 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Nov 8 00:23:40.987690 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 8 00:23:40.987798 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Nov 8 00:23:40.987902 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Nov 8 00:23:40.988001 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 8 00:23:40.988094 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Nov 8 00:23:40.988201 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 8 00:23:40.988343 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Nov 8 00:23:40.988477 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Nov 8 00:23:40.988587 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Nov 8 00:23:40.988687 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 8 00:23:40.988787 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Nov 8 00:23:40.988882 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 8 00:23:40.988994 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Nov 8 00:23:40.989097 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Nov 8 00:23:40.989194 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 8 00:23:40.989325 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Nov 8 00:23:40.989444 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 8 
00:23:40.989554 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Nov 8 00:23:40.989656 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Nov 8 00:23:40.989756 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Nov 8 00:23:40.989853 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 8 00:23:40.989949 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Nov 8 00:23:40.990042 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 8 00:23:40.990161 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Nov 8 00:23:40.990265 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Nov 8 00:23:40.992548 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Nov 8 00:23:40.992664 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 8 00:23:40.992767 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Nov 8 00:23:40.992869 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 8 00:23:40.992884 kernel: acpiphp: Slot [0] registered Nov 8 00:23:40.993000 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Nov 8 00:23:40.993110 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Nov 8 00:23:40.993214 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Nov 8 00:23:40.993341 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Nov 8 00:23:40.993475 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 8 00:23:40.993578 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Nov 8 00:23:40.993701 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 8 00:23:40.993718 kernel: acpiphp: Slot [0-2] registered Nov 8 00:23:40.993822 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 8 00:23:40.993921 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Nov 8 00:23:40.994020 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 8 00:23:40.994033 kernel: acpiphp: Slot [0-3] registered Nov 8 00:23:40.994131 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 8 00:23:40.994231 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 8 00:23:40.996415 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 8 00:23:40.996435 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:23:40.996451 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:23:40.996460 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:23:40.996469 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:23:40.996478 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 8 00:23:40.996488 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 8 00:23:40.996497 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 8 00:23:40.996506 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 8 00:23:40.996515 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 8 00:23:40.996525 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 8 00:23:40.996538 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 8 00:23:40.996549 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 8 00:23:40.996559 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 8 00:23:40.996569 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 8 
00:23:40.996580 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 8 00:23:40.996591 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 8 00:23:40.996602 kernel: iommu: Default domain type: Translated Nov 8 00:23:40.996612 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:23:40.996623 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:23:40.996636 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:23:40.996647 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 8 00:23:40.996658 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Nov 8 00:23:40.996782 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 8 00:23:40.996899 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 8 00:23:40.996999 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:23:40.997012 kernel: vgaarb: loaded Nov 8 00:23:40.997022 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 8 00:23:40.997032 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 8 00:23:40.997046 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:23:40.997056 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:23:40.997066 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:23:40.997075 kernel: pnp: PnP ACPI init Nov 8 00:23:40.997180 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 8 00:23:40.997196 kernel: pnp: PnP ACPI: found 5 devices Nov 8 00:23:40.997206 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:23:40.997216 kernel: NET: Registered PF_INET protocol family Nov 8 00:23:40.997229 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:23:40.997239 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:23:40.997248 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:23:40.997258 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:23:40.997267 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:23:40.997276 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:23:40.998993 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:23:40.999010 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:23:40.999022 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:23:40.999039 kernel: NET: Registered PF_XDP protocol family Nov 8 00:23:40.999176 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 8 00:23:40.999336 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 8 00:23:40.999456 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 8 00:23:40.999576 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Nov 8 00:23:40.999694 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Nov 8 00:23:40.999803 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Nov 8 00:23:40.999922 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 8 00:23:41.000028 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Nov 8 00:23:41.000144 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 8 00:23:41.000257 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] 
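Annotation: in the networking hash-table lines above, each "(order: N, X bytes)" pair describes an allocation of 2^N contiguous 4 KiB pages; dividing by the entry count gives the per-bucket size. A small check against the values printed in the log:

```python
# Verify the "order: N, X bytes" pairs from the hash-table lines above.
PAGE_SIZE = 4096

tables = [
    # (name, entries, order, bytes reported in the log)
    ("TCP established", 16384, 5, 131072),
    ("TCP bind",        16384, 7, 524288),
    ("UDP",              1024, 3,  32768),
]

for name, entries, order, reported in tables:
    size = (2 ** order) * PAGE_SIZE
    assert size == reported
    print(f"{name}: {size} bytes, {size // entries} bytes per bucket")
```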
Nov 8 00:23:41.001517 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Nov 8 00:23:41.001617 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 8 00:23:41.001729 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 8 00:23:41.001839 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Nov 8 00:23:41.001944 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 8 00:23:41.002047 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 8 00:23:41.002145 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Nov 8 00:23:41.002249 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 8 00:23:41.003429 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 8 00:23:41.003555 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Nov 8 00:23:41.003666 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 8 00:23:41.003785 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 8 00:23:41.003915 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Nov 8 00:23:41.004029 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 8 00:23:41.004140 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 8 00:23:41.004239 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Nov 8 00:23:41.005476 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Nov 8 00:23:41.005589 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 8 00:23:41.005695 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 8 00:23:41.005801 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Nov 8 00:23:41.005903 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Nov 8 00:23:41.005998 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 8 00:23:41.006104 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 8 00:23:41.006203 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Nov 8 00:23:41.006332 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 8 00:23:41.006461 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 8 00:23:41.006568 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:23:41.006662 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:23:41.006753 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:23:41.006844 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Nov 8 00:23:41.006930 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 8 00:23:41.007018 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 8 00:23:41.007128 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 8 00:23:41.007228 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 8 00:23:41.009425 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 8 00:23:41.009534 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Nov 8 00:23:41.009641 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 8 00:23:41.009796 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 8 00:23:41.009980 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 8 00:23:41.010081 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 8 
00:23:41.010186 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 8 00:23:41.011308 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 8 00:23:41.011451 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Nov 8 00:23:41.011554 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 8 00:23:41.011663 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Nov 8 00:23:41.011759 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 8 00:23:41.011851 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 8 00:23:41.011955 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Nov 8 00:23:41.012052 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Nov 8 00:23:41.012144 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 8 00:23:41.012246 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Nov 8 00:23:41.014406 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Nov 8 00:23:41.014519 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 8 00:23:41.014540 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 8 00:23:41.014553 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:23:41.014564 kernel: Initialise system trusted keyrings Nov 8 00:23:41.014576 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:23:41.014588 kernel: Key type asymmetric registered Nov 8 00:23:41.014599 kernel: Asymmetric key parser 'x509' registered Nov 8 00:23:41.014616 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:23:41.014628 kernel: io scheduler mq-deadline registered Nov 8 00:23:41.014640 kernel: io scheduler kyber registered Nov 8 00:23:41.014651 kernel: io scheduler bfq registered Nov 8 00:23:41.014762 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 8 00:23:41.014866 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 8 00:23:41.014966 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 8 00:23:41.015067 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 8 00:23:41.015167 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 8 00:23:41.015272 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 8 00:23:41.015418 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 8 00:23:41.015515 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 8 00:23:41.015617 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 8 00:23:41.015717 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 8 00:23:41.015815 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 8 00:23:41.015907 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 8 00:23:41.016009 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 8 00:23:41.016105 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 8 00:23:41.016210 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 8 00:23:41.022265 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 8 00:23:41.022315 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 8 00:23:41.022446 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Nov 8 00:23:41.022547 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Nov 8 00:23:41.022562 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:23:41.022574 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Nov 8 00:23:41.022592 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
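Annotation: the "pci_bus ... resource N [...]" lines above list the I/O and memory windows assigned to each secondary bus behind the Q35 root ports. As a hedged illustration of how to pull those windows out of such log text, the sketch below parses a few lines copied verbatim from above:

```python
import re

# A few pci_bus resource lines copied from the log above.
LOG = """\
pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
"""

pattern = re.compile(
    r"pci_bus (?P<bus>[0-9a-f:]+): resource \d+ "
    r"\[(?P<kind>io|mem)\s+(?P<start>0x[0-9a-f]+)-(?P<end>0x[0-9a-f]+)"
)

for m in pattern.finditer(LOG):
    start, end = int(m["start"], 16), int(m["end"], 16)
    size_kib = (end - start + 1) // 1024
    print(f"bus {m['bus']}: {m['kind']} window "
          f"{m['start']}-{m['end']} ({size_kib} KiB)")
```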
Nov 8 00:23:41.022603 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:23:41.022614 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:23:41.022624 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:23:41.022634 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:23:41.022747 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 8 00:23:41.022764 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:23:41.022848 kernel: rtc_cmos 00:03: registered as rtc0 Nov 8 00:23:41.022942 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:23:40 UTC (1762561420) Nov 8 00:23:41.023025 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 8 00:23:41.023039 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 8 00:23:41.023050 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:23:41.023060 kernel: Segment Routing with IPv6 Nov 8 00:23:41.023069 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:23:41.023082 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:23:41.023092 kernel: Key type dns_resolver registered Nov 8 00:23:41.023101 kernel: IPI shorthand broadcast: enabled Nov 8 00:23:41.023113 kernel: sched_clock: Marking stable (1524009379, 230604764)->(1793108563, -38494420) Nov 8 00:23:41.023123 kernel: registered taskstats version 1 Nov 8 00:23:41.023132 kernel: Loading compiled-in X.509 certificates Nov 8 00:23:41.023144 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:23:41.023153 kernel: Key type .fscrypt registered Nov 8 00:23:41.023164 kernel: Key type fscrypt-provisioning registered Nov 8 00:23:41.023175 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:23:41.023184 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:23:41.023195 kernel: ima: No architecture policies found Nov 8 00:23:41.023206 kernel: clk: Disabling unused clocks Nov 8 00:23:41.023216 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:23:41.023226 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:23:41.023235 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:23:41.023246 kernel: Run /init as init process Nov 8 00:23:41.023256 kernel: with arguments: Nov 8 00:23:41.023266 kernel: /init Nov 8 00:23:41.023275 kernel: with environment: Nov 8 00:23:41.023300 kernel: HOME=/ Nov 8 00:23:41.023312 kernel: TERM=linux Nov 8 00:23:41.023324 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:23:41.023338 systemd[1]: Detected virtualization kvm. Nov 8 00:23:41.023362 systemd[1]: Detected architecture x86-64. Nov 8 00:23:41.023373 systemd[1]: Running in initrd. Nov 8 00:23:41.023383 systemd[1]: No hostname configured, using default hostname. Nov 8 00:23:41.023393 systemd[1]: Hostname set to . Nov 8 00:23:41.023406 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:23:41.023416 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:23:41.023427 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
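Annotation: the rtc_cmos line above prints the same instant twice, as an ISO timestamp and as a Unix epoch value in parentheses. A quick consistency check:

```python
from datetime import datetime, timezone

# "setting system clock to 2025-11-08T00:23:40 UTC (1762561420)"
epoch = 1762561420
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-11-08T00:23:40+00:00, matching the timestamp in the log line
```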
Nov 8 00:23:41.023437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:41.023449 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:23:41.023459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:23:41.023470 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:23:41.023481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:23:41.023495 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:23:41.023506 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:23:41.023517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:41.023528 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:41.023539 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:23:41.023550 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:23:41.023561 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:23:41.023574 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:23:41.023585 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:41.023597 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:41.023607 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:23:41.023617 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:23:41.023627 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:41.023638 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:41.023649 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:41.023660 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:23:41.023672 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:23:41.023683 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:23:41.023693 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:23:41.023703 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:23:41.023713 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:23:41.023724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:23:41.023734 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:41.023769 systemd-journald[187]: Collecting audit messages is disabled. Nov 8 00:23:41.023798 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:41.023810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:41.023820 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:23:41.023835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:23:41.023848 systemd-journald[187]: Journal started Nov 8 00:23:41.023872 systemd-journald[187]: Runtime Journal (/run/log/journal/ef39ea31895a47e98138c06600e226bd) is 4.8M, max 38.4M, 33.6M free. 
Nov 8 00:23:40.997650 systemd-modules-load[188]: Inserted module 'overlay' Nov 8 00:23:41.074328 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:23:41.074390 kernel: Bridge firewalling registered Nov 8 00:23:41.035799 systemd-modules-load[188]: Inserted module 'br_netfilter' Nov 8 00:23:41.077732 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:23:41.083669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:41.084549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:41.093463 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:23:41.095437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:23:41.098499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:23:41.101722 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:41.116443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:23:41.122546 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:41.124568 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:41.126338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:41.127335 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:41.134483 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:23:41.137424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:23:41.149996 dracut-cmdline[221]: dracut-dracut-053 Nov 8 00:23:41.153683 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:41.173633 systemd-resolved[223]: Positive Trust Anchors: Nov 8 00:23:41.174381 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:23:41.174410 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:23:41.177563 systemd-resolved[223]: Defaulting to hostname 'linux'. Nov 8 00:23:41.186475 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:23:41.187620 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 8 00:23:41.231359 kernel: SCSI subsystem initialized Nov 8 00:23:41.240316 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:23:41.253336 kernel: iscsi: registered transport (tcp) Nov 8 00:23:41.272040 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:23:41.272122 kernel: QLogic iSCSI HBA Driver Nov 8 00:23:41.312744 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:41.318465 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:23:41.346698 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:23:41.346761 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:23:41.348931 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:23:41.392336 kernel: raid6: avx2x4 gen() 33101 MB/s Nov 8 00:23:41.410331 kernel: raid6: avx2x2 gen() 25539 MB/s Nov 8 00:23:41.429664 kernel: raid6: avx2x1 gen() 23802 MB/s Nov 8 00:23:41.429730 kernel: raid6: using algorithm avx2x4 gen() 33101 MB/s Nov 8 00:23:41.448439 kernel: raid6: .... xor() 3615 MB/s, rmw enabled Nov 8 00:23:41.448509 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:23:41.468368 kernel: xor: automatically using best checksumming function avx Nov 8 00:23:41.590339 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:23:41.602665 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:41.609489 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:41.619673 systemd-udevd[406]: Using default interface naming scheme 'v255'. Nov 8 00:23:41.622755 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:41.632504 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:23:41.646245 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Nov 8 00:23:41.675346 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:41.681464 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:23:41.735489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:41.744482 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:23:41.757976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:41.760778 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:41.764182 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:41.765332 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:23:41.773400 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:23:41.791300 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:41.816642 kernel: scsi host0: Virtio SCSI HBA Nov 8 00:23:41.826334 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 8 00:23:41.842327 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:23:41.864635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:41.917750 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 8 00:23:41.917774 kernel: AES CTR mode by8 optimization enabled Nov 8 00:23:41.917782 kernel: ACPI: bus type USB registered Nov 8 00:23:41.917803 kernel: usbcore: registered new interface driver usbfs Nov 8 00:23:41.917811 kernel: usbcore: registered new interface driver hub Nov 8 00:23:41.917818 kernel: usbcore: registered new device driver usb Nov 8 00:23:41.917825 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 8 00:23:41.917985 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 8 00:23:41.918096 kernel: libata version 3.00 loaded. Nov 8 00:23:41.864746 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:41.919313 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:23:41.920959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:41.921123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:41.923433 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:41.929560 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 8 00:23:41.934530 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 8 00:23:41.934694 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 8 00:23:41.936939 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 8 00:23:41.942775 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 00:23:41.942952 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 00:23:41.942965 kernel: hub 1-0:1.0: USB hub found Nov 8 00:23:41.937778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:41.958025 kernel: hub 1-0:1.0: 4 ports detected Nov 8 00:23:41.958191 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Nov 8 00:23:41.958410 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 00:23:41.958518 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 00:23:41.958604 kernel: hub 2-0:1.0: USB hub found Nov 8 00:23:41.958704 kernel: hub 2-0:1.0: 4 ports detected Nov 8 00:23:41.958784 kernel: scsi host1: ahci Nov 8 00:23:41.960568 kernel: scsi host2: ahci Nov 8 00:23:41.963330 kernel: scsi host3: ahci Nov 8 00:23:41.963592 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 8 00:23:41.963701 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 8 00:23:41.963789 kernel: scsi host4: ahci Nov 8 00:23:41.963874 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:23:41.964142 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 8 00:23:41.964264 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 00:23:41.964398 kernel: scsi host5: ahci Nov 8 00:23:41.965309 kernel: scsi host6: ahci Nov 8 00:23:41.965478 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51 Nov 8 00:23:41.965489 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51 Nov 8 00:23:41.965497 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51 Nov 8 00:23:41.965504 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51 Nov 8 00:23:41.965515 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51 Nov 8 00:23:41.965523 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51 Nov 8 00:23:41.968307 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:23:41.968329 kernel: GPT:17805311 != 80003071 Nov 8 00:23:41.968338 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:23:41.968346 kernel: GPT:17805311 != 80003071 Nov 8 00:23:41.968363 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:23:41.968371 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:41.968379 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:23:42.080166 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:42.086613 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:23:42.097007 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
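Annotation: the GPT warnings above say the primary header records the backup (alternate) header at LBA 17805311, while on a disk of 80003072 sectors it should sit in the last sector, LBA 80003071. This is the typical symptom of a disk image written for a smaller disk than the target volume, hence the kernel's hint to repair the GPT. A hedged check of the arithmetic:

```python
# Check the disk-size and GPT numbers from the sd/GPT lines above.
SECTOR = 512
sectors = 80003072                 # "[sda] 80003072 512-byte logical blocks"

size = sectors * SECTOR
print(f"disk size: {size / 1e9:.1f} GB ({size / 2**30:.1f} GiB)")
# -> 41.0 GB (38.1 GiB), matching the log

expected_alt_lba = sectors - 1     # 80003071: last LBA, where the backup belongs
recorded_alt_lba = 17805311        # what the on-disk primary header claims
print(expected_alt_lba, recorded_alt_lba)
# 17805311 != 80003071, which is exactly the mismatch the kernel reports
```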
Nov 8 00:23:42.189421 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 8 00:23:42.283315 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:42.283437 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:42.287143 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:42.290338 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:42.290404 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:42.295545 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 8 00:23:42.295591 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 8 00:23:42.297772 kernel: ata1.00: applying bridge limits Nov 8 00:23:42.299785 kernel: ata1.00: configured for UDMA/100 Nov 8 00:23:42.305327 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 8 00:23:42.339325 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:23:42.357109 kernel: usbcore: registered new interface driver usbhid Nov 8 00:23:42.357176 kernel: usbhid: USB HID core driver Nov 8 00:23:42.361427 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 8 00:23:42.361795 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:23:42.380528 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 8 00:23:42.380599 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 8 00:23:42.388314 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (450) Nov 8 00:23:42.389326 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:23:42.398311 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (456) Nov 8 00:23:42.408971 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 8 00:23:42.417146 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 8 00:23:42.422383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:23:42.426032 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 8 00:23:42.427030 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 8 00:23:42.437518 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:23:42.443053 disk-uuid[576]: Primary Header is updated. Nov 8 00:23:42.443053 disk-uuid[576]: Secondary Entries is updated. Nov 8 00:23:42.443053 disk-uuid[576]: Secondary Header is updated. Nov 8 00:23:42.460323 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:42.466317 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:42.481315 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:43.477840 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:43.477907 disk-uuid[577]: The operation has completed successfully. Nov 8 00:23:43.529504 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:23:43.529618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:23:43.543472 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 8 00:23:43.548750 sh[597]: Success Nov 8 00:23:43.564333 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 8 00:23:43.609863 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:23:43.617404 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:23:43.620786 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:23:43.639362 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:23:43.639430 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:43.643015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:23:43.649724 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:23:43.649767 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:23:43.661318 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:23:43.662979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:23:43.664074 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:23:43.673490 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:23:43.676454 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:23:43.689014 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:43.689057 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:43.689069 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:43.699228 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:23:43.699280 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:43.708386 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:23:43.713307 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:43.720008 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:23:43.726472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:23:43.763265 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:43.776575 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:23:43.799168 systemd-networkd[778]: lo: Link UP Nov 8 00:23:43.799176 systemd-networkd[778]: lo: Gained carrier Nov 8 00:23:43.802150 systemd-networkd[778]: Enumeration completed Nov 8 00:23:43.802366 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:23:43.803929 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:43.803932 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:43.804487 systemd[1]: Reached target network.target - Network. Nov 8 00:23:43.805016 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:43.805019 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
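verity-setup.service above assembles the integrity-checked /dev/mapper/usr device from the USR-A partition, using the root hash passed on the kernel command line (verity.usrhash=). Conceptually this corresponds to a veritysetup call like the sketch below; the exact layout of Flatcar's USR partition (data and hash tree on the same device, at some offset) is an assumption here, not something the log shows:

    # Hypothetical equivalent of what verity-setup does for /usr.
    # <root-hash> is the verity.usrhash= value from the kernel command line.
    veritysetup open \
        /dev/disk/by-partlabel/USR-A usr \
        /dev/disk/by-partlabel/USR-A \
        <root-hash> --hash-offset=<offset-of-hash-tree>
    # The resulting read-only device appears as /dev/mapper/usr and is
    # mounted at /sysusr/usr, as seen above.

The "sha256 using implementation sha256-ni" line simply means the SHA-NI accelerated SHA-256 code is used to verify each block as it is read.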
Nov 8 00:23:43.805660 systemd-networkd[778]: eth0: Link UP Nov 8 00:23:43.805663 systemd-networkd[778]: eth0: Gained carrier Nov 8 00:23:43.814860 ignition[733]: Ignition 2.19.0 Nov 8 00:23:43.805669 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:43.814866 ignition[733]: Stage: fetch-offline Nov 8 00:23:43.811039 systemd-networkd[778]: eth1: Link UP Nov 8 00:23:43.814896 ignition[733]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:43.811042 systemd-networkd[778]: eth1: Gained carrier Nov 8 00:23:43.814902 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:23:43.811050 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:43.814982 ignition[733]: parsed url from cmdline: "" Nov 8 00:23:43.816357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:43.814985 ignition[733]: no config URL provided Nov 8 00:23:43.823576 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 8 00:23:43.814989 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:23:43.814994 ignition[733]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:23:43.814999 ignition[733]: failed to fetch config: resource requires networking Nov 8 00:23:43.815153 ignition[733]: Ignition finished successfully Nov 8 00:23:43.832335 ignition[786]: Ignition 2.19.0 Nov 8 00:23:43.832346 ignition[786]: Stage: fetch Nov 8 00:23:43.832494 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:43.832502 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:23:43.832568 ignition[786]: parsed url from cmdline: "" Nov 8 00:23:43.832571 ignition[786]: no config URL provided Nov 8 00:23:43.832575 ignition[786]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:23:43.832580 ignition[786]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:23:43.832596 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 8 00:23:43.832710 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:23:43.849345 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 8 00:23:43.878356 systemd-networkd[778]: eth0: DHCPv4 address 65.109.8.72/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 8 00:23:44.033820 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Nov 8 00:23:44.037721 ignition[786]: GET result: OK Nov 8 00:23:44.037788 ignition[786]: parsing config with SHA512: 0c5605dbaa85f7e136a4636859dd27b2ad86a7576c3832a20bb6bb7c8c458d817c9d2e7fa50b0371b6508dfcc06e016c4a332ad9eddc31140678c142353de5b5 Nov 8 00:23:44.041605 unknown[786]: fetched base config from "system" Nov 8 00:23:44.041615 unknown[786]: fetched base config from "system" Nov 8 00:23:44.041990 ignition[786]: fetch: fetch complete Nov 8 00:23:44.041633 unknown[786]: fetched user config from "hetzner" Nov 8 00:23:44.041994 ignition[786]: fetch: fetch passed Nov 8 00:23:44.043939 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:23:44.042029 ignition[786]: Ignition finished successfully Nov 8 00:23:44.052497 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
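The fetch stage above shows Ignition's retry behaviour: the first GET to the Hetzner metadata service fails with "network is unreachable" because DHCP has not finished yet, and attempt #2 succeeds once eth0 and eth1 have addresses. The same endpoint can be queried by hand from inside the instance (curl and its flags are just an illustration, not what the initrd itself uses):

    # Fetch the user data (the Ignition config) exactly as the fetch stage does.
    curl -s http://169.254.169.254/hetzner/v1/userdata

Ignition then logs the SHA512 of the fetched config and merges it with the base config shipped in the OS image ("fetched base config from system", "fetched user config from hetzner").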
Nov 8 00:23:44.064332 ignition[794]: Ignition 2.19.0 Nov 8 00:23:44.064349 ignition[794]: Stage: kargs Nov 8 00:23:44.064577 ignition[794]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:44.066708 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:23:44.064590 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:23:44.065527 ignition[794]: kargs: kargs passed Nov 8 00:23:44.065575 ignition[794]: Ignition finished successfully Nov 8 00:23:44.084523 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:23:44.096065 ignition[801]: Ignition 2.19.0 Nov 8 00:23:44.096075 ignition[801]: Stage: disks Nov 8 00:23:44.101157 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:23:44.096236 ignition[801]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:44.111949 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:44.096245 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:23:44.113956 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:23:44.097144 ignition[801]: disks: disks passed Nov 8 00:23:44.116767 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:44.097185 ignition[801]: Ignition finished successfully Nov 8 00:23:44.119680 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:23:44.122804 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:23:44.132566 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:23:44.152343 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:23:44.156922 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:23:44.166473 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:23:44.246312 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:23:44.246951 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:23:44.248261 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:23:44.259393 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:44.262413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:23:44.266216 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:23:44.271881 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:23:44.298024 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (818) Nov 8 00:23:44.298065 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:44.298086 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:44.298104 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:44.298121 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:23:44.298146 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:44.271915 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:44.296513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:23:44.300639 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
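systemd-fsck checked the ext4 ROOT filesystem (sda9) before it was mounted read-write at /sysroot; "clean, 14/1628000 files, 120691/1617920 blocks" is ordinary e2fsck output. The same check can be repeated by hand in read-only mode, assuming the label used in this log:

    # Non-destructive check: -n answers "no" to every repair prompt.
    # (Meant for an unmounted filesystem; results on a mounted one are unreliable.)
    e2fsck -n /dev/disk/by-label/ROOT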
Nov 8 00:23:44.310525 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:23:44.339889 coreos-metadata[820]: Nov 08 00:23:44.339 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 8 00:23:44.341731 coreos-metadata[820]: Nov 08 00:23:44.341 INFO Fetch successful Nov 8 00:23:44.343866 coreos-metadata[820]: Nov 08 00:23:44.343 INFO wrote hostname ci-4081-3-6-n-d839b30383 to /sysroot/etc/hostname Nov 8 00:23:44.345137 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:44.354268 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:23:44.359102 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:23:44.362965 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:23:44.366559 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:23:44.432644 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:44.439410 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:23:44.443413 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:23:44.449491 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:44.467577 ignition[936]: INFO : Ignition 2.19.0 Nov 8 00:23:44.470426 ignition[936]: INFO : Stage: mount Nov 8 00:23:44.470426 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:44.470426 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:23:44.475250 ignition[936]: INFO : mount: mount passed Nov 8 00:23:44.475250 ignition[936]: INFO : Ignition finished successfully Nov 8 00:23:44.471173 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:23:44.479409 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:23:44.480211 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:23:44.636937 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:23:44.644700 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:44.673343 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Nov 8 00:23:44.681613 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:44.681672 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:44.687167 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:44.703062 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:23:44.703112 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:44.707096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
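flatcar-metadata-hostname.service ran coreos-metadata (Afterburn) to look up the instance hostname and write it into /sysroot/etc/hostname, which is why the host later identifies itself as ci-4081-3-6-n-d839b30383. A rough manual equivalent against the same Hetzner endpoint (the curl invocation is illustrative):

    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname > /etc/hostname

The "cut: /sysroot/etc/passwd: No such file or directory" lines from initrd-setup-root appear to be harmless on a first boot: the files it tries to copy entries from simply do not exist yet, and the service still finishes successfully above.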
Nov 8 00:23:44.739345 ignition[964]: INFO : Ignition 2.19.0 Nov 8 00:23:44.739345 ignition[964]: INFO : Stage: files Nov 8 00:23:44.739345 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:44.739345 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:23:44.748148 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:23:44.748148 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:23:44.748148 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:23:44.755363 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:23:44.755363 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:23:44.755363 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:23:44.754263 unknown[964]: wrote ssh authorized keys file for user: core Nov 8 00:23:44.764795 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:23:44.764795 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:23:45.300609 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:23:45.408444 systemd-networkd[778]: eth0: Gained IPv6LL Nov 8 00:23:45.613062 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:23:45.613062 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 8 00:23:45.615923 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 8 00:23:45.720457 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:23:45.793708 systemd-networkd[778]: eth1: Gained IPv6LL Nov 8 00:23:45.811780 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 8 00:23:45.811780 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: 
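Everything in the files stage is driven by the Ignition config fetched from the metadata service: it sets up SSH keys for the "core" user and writes a series of files, several of them downloaded over HTTPS (the helm tarball, the cilium CLI). A minimal, hypothetical Ignition config fragment that would produce a download like op(3); the spec version and the example SSH key are assumptions, not values taken from this log:

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA...example user@example"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" } }
        ]
      }
    }

Paths in the log carry a /sysroot prefix only because the real root is still mounted there from the initramfs; the config itself refers to the final paths.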
createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:45.815460 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:23:46.122908 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:23:46.342718 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:46.342718 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:46.346385 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:46.346385 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:46.346385 ignition[964]: INFO : files: files passed Nov 8 00:23:46.346385 ignition[964]: INFO : Ignition finished successfully Nov 8 00:23:46.346862 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:23:46.361487 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
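The link written in op(a) is what turns the downloaded image into a systemd system extension: sysext images referenced from /etc/extensions are merged into /usr at boot, which is visible later in this log as the sd-merge messages. The manual equivalent of that one step, using the paths from the log:

    # Register the downloaded image as the "kubernetes" system extension.
    ln -s /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw \
          /etc/extensions/kubernetes.raw

prepare-helm.service and the coreos-metadata drop-in are ordinary unit files carried inside the same Ignition config; op(10) then enables prepare-helm.service via the preset mechanism.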
Nov 8 00:23:46.364436 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:23:46.367380 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:23:46.367481 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:23:46.374559 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.374559 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.378344 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.377157 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:46.378563 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:23:46.386459 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:23:46.402238 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:23:46.402349 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:23:46.404031 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:23:46.405199 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:23:46.406681 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:23:46.416562 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:23:46.425142 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:46.429420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:23:46.437143 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:46.437950 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:46.439458 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:23:46.440843 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:23:46.440941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:46.442562 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:23:46.443516 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:23:46.444922 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:23:46.446261 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:46.447597 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:46.449041 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:23:46.450502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:46.451978 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:23:46.453426 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:23:46.454889 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:23:46.456252 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:23:46.456381 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:46.457971 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Nov 8 00:23:46.458936 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:46.460250 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:23:46.460616 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:46.461787 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:23:46.461908 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:46.463748 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:23:46.463840 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:46.464770 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:23:46.464848 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:23:46.466062 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:23:46.466140 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:46.477919 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:23:46.481509 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:23:46.482143 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:23:46.483902 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:46.487327 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:23:46.488126 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:46.490688 ignition[1018]: INFO : Ignition 2.19.0 Nov 8 00:23:46.490688 ignition[1018]: INFO : Stage: umount Nov 8 00:23:46.490688 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:46.490688 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:23:46.490688 ignition[1018]: INFO : umount: umount passed Nov 8 00:23:46.490688 ignition[1018]: INFO : Ignition finished successfully Nov 8 00:23:46.492075 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:23:46.492175 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:23:46.496911 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:23:46.496996 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:23:46.502194 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:23:46.502241 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:23:46.506567 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:23:46.506633 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:23:46.507655 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:23:46.507703 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:23:46.508439 systemd[1]: Stopped target network.target - Network. Nov 8 00:23:46.510664 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:23:46.510719 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:46.512767 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:23:46.513363 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:23:46.517459 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 8 00:23:46.518801 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:23:46.520072 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:23:46.521387 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:23:46.521445 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:46.522646 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:23:46.522684 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:46.524066 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:23:46.524119 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:23:46.525535 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:23:46.525575 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:46.527000 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:23:46.528412 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:23:46.530778 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:23:46.531243 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:23:46.531335 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:23:46.532411 systemd-networkd[778]: eth0: DHCPv6 lease lost Nov 8 00:23:46.533035 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:23:46.533114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:46.536363 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:23:46.536425 systemd-networkd[778]: eth1: DHCPv6 lease lost Nov 8 00:23:46.536484 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:23:46.538780 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:23:46.538891 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:23:46.540377 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:23:46.540433 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:46.547517 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:23:46.549932 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:23:46.550001 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:46.550764 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:23:46.550826 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:46.551628 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:23:46.551667 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:46.553076 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:23:46.553115 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:46.554591 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:46.564879 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:23:46.564997 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:46.566857 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:23:46.566933 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:23:46.568447 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Nov 8 00:23:46.568493 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:46.569704 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:23:46.569729 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:46.571014 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:23:46.571050 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:46.572983 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:23:46.573017 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:46.574335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:46.574368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:46.582496 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:23:46.585356 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:23:46.585436 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:46.586190 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:23:46.586233 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:46.587765 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:23:46.587805 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:46.588626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:46.588665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:46.590627 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:23:46.590694 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:23:46.592059 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:23:46.608564 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:23:46.614906 systemd[1]: Switching root. Nov 8 00:23:46.657845 systemd-journald[187]: Journal stopped Nov 8 00:23:47.542950 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Nov 8 00:23:47.543006 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:23:47.543020 kernel: SELinux: policy capability open_perms=1 Nov 8 00:23:47.543028 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:23:47.543043 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:23:47.543053 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:23:47.543061 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:23:47.543069 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:23:47.543078 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:23:47.543086 kernel: audit: type=1403 audit(1762561426.829:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:23:47.543094 systemd[1]: Successfully loaded SELinux policy in 48.507ms. Nov 8 00:23:47.543112 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.184ms. 
Nov 8 00:23:47.543121 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:23:47.543130 systemd[1]: Detected virtualization kvm. Nov 8 00:23:47.543138 systemd[1]: Detected architecture x86-64. Nov 8 00:23:47.543146 systemd[1]: Detected first boot. Nov 8 00:23:47.543156 systemd[1]: Hostname set to <ci-4081-3-6-n-d839b30383>. Nov 8 00:23:47.543164 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:23:47.543172 zram_generator::config[1061]: No configuration found. Nov 8 00:23:47.543181 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:23:47.543189 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:23:47.543197 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:23:47.543205 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:23:47.543214 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:23:47.543223 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:23:47.543234 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:23:47.543243 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:23:47.543251 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:23:47.543259 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:23:47.543267 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:23:47.543275 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:23:47.544008 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:47.544041 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:47.544063 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:23:47.544082 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:23:47.544098 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:23:47.544115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:23:47.544129 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:23:47.544143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:47.544158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:23:47.544181 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:23:47.544191 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:23:47.544199 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:23:47.544207 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:47.544218 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:23:47.544226 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:23:47.544277 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:23:47.544305 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:23:47.544318 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:23:47.544326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:47.544335 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:47.544343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:47.544351 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:23:47.544359 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:23:47.544368 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:23:47.544380 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:23:47.544390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:47.544414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:23:47.544426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:23:47.544442 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:23:47.544457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:23:47.544474 systemd[1]: Reached target machines.target - Containers. Nov 8 00:23:47.544492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:23:47.544502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:47.544510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:23:47.544519 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:23:47.544527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:47.544535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:23:47.544543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:47.544551 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:23:47.544559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:47.544569 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:23:47.544578 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:23:47.544586 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:23:47.544594 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:23:47.544603 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:23:47.544611 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:23:47.544619 kernel: loop: module loaded Nov 8 00:23:47.544629 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:23:47.544637 kernel: fuse: init (API version 7.39) Nov 8 00:23:47.544647 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 8 00:23:47.544674 systemd-journald[1151]: Collecting audit messages is disabled. Nov 8 00:23:47.544694 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:23:47.544704 systemd-journald[1151]: Journal started Nov 8 00:23:47.544722 systemd-journald[1151]: Runtime Journal (/run/log/journal/ef39ea31895a47e98138c06600e226bd) is 4.8M, max 38.4M, 33.6M free. Nov 8 00:23:47.259360 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:23:47.276572 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:23:47.276969 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:23:47.558317 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:23:47.563277 kernel: ACPI: bus type drm_connector registered Nov 8 00:23:47.569341 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:23:47.569428 systemd[1]: Stopped verity-setup.service. Nov 8 00:23:47.583369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:47.583454 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:23:47.580899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:23:47.581955 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:23:47.582870 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:23:47.583833 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:23:47.584637 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:23:47.585488 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:23:47.586367 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:23:47.587429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:47.588590 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:23:47.588794 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:23:47.589774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:47.589954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:47.590950 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:23:47.591130 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:23:47.592208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:47.592443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:47.593512 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:23:47.593674 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:23:47.594626 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:47.594850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:47.595897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:47.597019 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:23:47.598252 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:23:47.605693 systemd[1]: Reached target network-pre.target - Preparation for Network. 
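The "Runtime Journal" line above reports journald's in-memory budget under /run: 4.8M in use, capped at 38.4M, 33.6M still free. Those limits come from RuntimeMaxUse= and related settings in journald.conf (by default derived from the size of the backing filesystem); current usage on a running system can be checked with:

    # Total disk (and /run) space used by journal files.
    journalctl --disk-usage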
Nov 8 00:23:47.611816 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:23:47.615550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:23:47.617385 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:23:47.617430 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:47.619253 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:23:47.625038 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:23:47.636534 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:23:47.638036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:47.640691 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:23:47.643436 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:23:47.644634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:23:47.646005 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:23:47.647371 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:23:47.649433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:23:47.653398 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:23:47.655420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:23:47.658630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:47.662520 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:23:47.663498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:23:47.670618 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:23:47.682861 systemd-journald[1151]: Time spent on flushing to /var/log/journal/ef39ea31895a47e98138c06600e226bd is 23.147ms for 1137 entries. Nov 8 00:23:47.682861 systemd-journald[1151]: System Journal (/var/log/journal/ef39ea31895a47e98138c06600e226bd) is 8.0M, max 584.8M, 576.8M free. Nov 8 00:23:47.724962 systemd-journald[1151]: Received client request to flush runtime journal. Nov 8 00:23:47.726472 kernel: loop0: detected capacity change from 0 to 8 Nov 8 00:23:47.726491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:23:47.689464 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:23:47.690761 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:23:47.693483 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:23:47.702769 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:23:47.719771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:47.729688 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Nov 8 00:23:47.743326 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Nov 8 00:23:47.743340 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Nov 8 00:23:47.743931 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:23:47.749418 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:47.755440 kernel: loop1: detected capacity change from 0 to 224512 Nov 8 00:23:47.757501 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:23:47.763700 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:23:47.767597 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:23:47.778807 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:23:47.790496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:23:47.801263 kernel: loop2: detected capacity change from 0 to 142488 Nov 8 00:23:47.817470 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Nov 8 00:23:47.818376 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Nov 8 00:23:47.827934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:47.852306 kernel: loop3: detected capacity change from 0 to 140768 Nov 8 00:23:47.895321 kernel: loop4: detected capacity change from 0 to 8 Nov 8 00:23:47.902517 kernel: loop5: detected capacity change from 0 to 224512 Nov 8 00:23:47.933349 kernel: loop6: detected capacity change from 0 to 142488 Nov 8 00:23:47.957364 kernel: loop7: detected capacity change from 0 to 140768 Nov 8 00:23:47.974751 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 8 00:23:47.975161 (sd-merge)[1210]: Merged extensions into '/usr'. Nov 8 00:23:47.980959 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:23:47.981228 systemd[1]: Reloading... Nov 8 00:23:48.060333 zram_generator::config[1232]: No configuration found. Nov 8 00:23:48.083009 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:23:48.166019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:48.205350 systemd[1]: Reloading finished in 223 ms. Nov 8 00:23:48.223790 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:23:48.225139 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:23:48.237785 systemd[1]: Starting ensure-sysext.service... Nov 8 00:23:48.240257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:23:48.267608 systemd[1]: Reloading requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:23:48.267631 systemd[1]: Reloading... Nov 8 00:23:48.279638 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:23:48.279875 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
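The sd-merge lines are systemd-sysext performing the actual merge: the four extension images (including the kubernetes.raw link written by Ignition earlier) are overlaid onto /usr, and systemd then reloads so the newly visible unit files are picked up. The loopN "detected capacity change" messages correspond to the loop devices set up for those images. On a running system the result can be inspected with:

    # Show which extension images are currently merged and which hierarchies they overlay.
    systemd-sysext status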
Nov 8 00:23:48.280474 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:23:48.280735 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Nov 8 00:23:48.280837 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Nov 8 00:23:48.283035 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:23:48.283181 systemd-tmpfiles[1280]: Skipping /boot Nov 8 00:23:48.289224 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:23:48.289236 systemd-tmpfiles[1280]: Skipping /boot Nov 8 00:23:48.322321 zram_generator::config[1307]: No configuration found. Nov 8 00:23:48.411557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:48.454779 systemd[1]: Reloading finished in 186 ms. Nov 8 00:23:48.470390 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:23:48.475760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:48.484499 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:23:48.490144 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:23:48.499470 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:23:48.503068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:23:48.513448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:48.516537 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:23:48.529521 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:23:48.533524 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:48.533667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:48.535501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:48.538557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:48.545538 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:48.546392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:48.546509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:48.551039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:48.551202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:48.551451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:48.551559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 8 00:23:48.554048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:48.556328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:48.562513 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:23:48.564052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:48.564218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:48.568335 systemd[1]: Finished ensure-sysext.service. Nov 8 00:23:48.580443 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:23:48.582377 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:23:48.583900 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:23:48.594360 augenrules[1379]: No rules Nov 8 00:23:48.594406 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:23:48.596330 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:23:48.604890 systemd-udevd[1364]: Using default interface naming scheme 'v255'. Nov 8 00:23:48.606007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:48.606130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:48.607581 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:48.607699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:48.608723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:48.608821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:48.610141 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:23:48.610461 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:23:48.611829 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:23:48.611884 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:23:48.613888 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:23:48.618363 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:23:48.641198 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:48.643987 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:23:48.657660 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:23:48.659712 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:23:48.770428 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:23:48.771778 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:23:48.778057 systemd-resolved[1361]: Positive Trust Anchors: Nov 8 00:23:48.779442 systemd-resolved[1361]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:23:48.779474 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:23:48.782538 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:23:48.787146 systemd-resolved[1361]: Using system hostname 'ci-4081-3-6-n-d839b30383'. Nov 8 00:23:48.788901 systemd-networkd[1414]: lo: Link UP Nov 8 00:23:48.791330 systemd-networkd[1414]: lo: Gained carrier Nov 8 00:23:48.793244 systemd-networkd[1414]: Enumeration completed Nov 8 00:23:48.794371 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:23:48.795163 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:23:48.795744 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:48.795855 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:48.795960 systemd[1]: Reached target network.target - Network. Nov 8 00:23:48.796879 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:48.797370 systemd-networkd[1414]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:48.797446 systemd-networkd[1414]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:48.798529 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:48.798610 systemd-networkd[1414]: eth0: Link UP Nov 8 00:23:48.798649 systemd-networkd[1414]: eth0: Gained carrier Nov 8 00:23:48.798686 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:48.801860 systemd-networkd[1414]: eth1: Link UP Nov 8 00:23:48.802259 systemd-networkd[1414]: eth1: Gained carrier Nov 8 00:23:48.802659 systemd-networkd[1414]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:48.802717 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:23:48.823852 systemd-networkd[1414]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:48.849299 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1410) Nov 8 00:23:48.854907 systemd-networkd[1414]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 8 00:23:48.857109 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. 
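[Editor's note] systemd-resolved prints the root DNSSEC positive trust anchor (the DS record with key tag 20326) and a list of negative trust anchors, i.e. private and reverse zones for which validation is skipped. A small sketch of the suffix match implied by that list, using a subset of the zones shown above (the 16-31.172.in-addr.arpa entries are omitted for brevity):

    # negative_anchor.py - check whether a DNS name falls under one of the negative
    # trust anchors listed by systemd-resolved above (simple suffix match).
    NEGATIVE_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
        "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal",
        "intranet", "lan", "local", "private", "test",
    }

    def under_negative_anchor(name: str) -> bool:
        labels = name.rstrip(".").lower().split(".")
        # Try every suffix: "foo.bar.lan" -> "foo.bar.lan", "bar.lan", "lan".
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

    assert under_negative_anchor("printer.lan")
    assert under_negative_anchor("3.2.1.10.in-addr.arpa")
    assert not under_negative_anchor("example.org")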
Nov 8 00:23:48.869528 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 8 00:23:48.868387 systemd-networkd[1414]: eth0: DHCPv4 address 65.109.8.72/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 8 00:23:48.869728 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Nov 8 00:23:48.890123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:23:48.892496 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:23:48.892556 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:23:48.894713 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 8 00:23:48.894762 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:48.894840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:48.901609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:48.906435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:48.911494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:48.912279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:48.913623 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:23:48.915610 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:23:48.915631 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:48.915909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:48.916028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:48.933189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:48.933399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:48.935190 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:48.937400 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 8 00:23:48.936341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:48.942985 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:23:48.946255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:23:48.947194 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
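[Editor's note] The unit name dev-disk-by\x2dlabel-OEM.device above is systemd's escaped form of the path /dev/disk/by-label/OEM: the leading slash is dropped, remaining slashes become "-", and literal dashes (among other unsafe bytes) are hex-escaped as \xNN. A simplified sketch of that mapping for plain ASCII paths (real escaping, as done by systemd-escape --path, has more edge cases):

    # unit_escape.py - approximate systemd's path escaping for the device unit above.
    # Assumes an absolute, already-normalized ASCII path.
    def escape_path(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # e.g. "-" -> \x2d
        return "".join(out)

    print(escape_path("/dev/disk/by-label/OEM") + ".device")
    # -> dev-disk-by\x2dlabel-OEM.device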
Nov 8 00:23:48.954693 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:23:48.965311 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:23:48.971495 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:23:48.971820 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:23:48.976582 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:48.994088 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Nov 8 00:23:48.994151 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Nov 8 00:23:49.000076 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:23:49.001773 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 8 00:23:49.001808 kernel: [drm] features: -context_init Nov 8 00:23:49.003236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:49.003619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:49.005843 kernel: [drm] number of scanouts: 1 Nov 8 00:23:49.005870 kernel: [drm] number of cap sets: 0 Nov 8 00:23:49.007493 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Nov 8 00:23:49.008614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:49.012062 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 8 00:23:49.012091 kernel: Console: switching to colour frame buffer device 160x50 Nov 8 00:23:49.020077 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 8 00:23:49.021228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:49.021404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:49.028462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:49.077959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:49.134458 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:23:49.139524 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:23:49.150309 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:49.182759 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:23:49.185063 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:49.185231 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:23:49.185531 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:23:49.185696 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:23:49.186074 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:23:49.186272 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:23:49.187068 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:23:49.187153 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:23:49.187189 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:23:49.187247 systemd[1]: Reached target timers.target - Timer Units. 
Nov 8 00:23:49.189507 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:23:49.191999 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:23:49.196962 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:23:49.198621 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:23:49.199391 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:23:49.199569 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:23:49.199640 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:23:49.199751 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:23:49.199781 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:23:49.201141 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:23:49.206624 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:23:49.210337 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:49.215360 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:23:49.222487 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:23:49.226510 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:23:49.226982 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:23:49.233490 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:23:49.238653 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:23:49.248913 jq[1470]: false Nov 8 00:23:49.249498 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 8 00:23:49.253369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:23:49.263463 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:23:49.265769 dbus-daemon[1469]: [system] SELinux support is enabled Nov 8 00:23:49.269488 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:23:49.271546 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:23:49.272003 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:23:49.281512 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:23:49.298805 coreos-metadata[1468]: Nov 08 00:23:49.297 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 8 00:23:49.297502 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:23:49.301652 coreos-metadata[1468]: Nov 08 00:23:49.299 INFO Fetch successful Nov 8 00:23:49.301652 coreos-metadata[1468]: Nov 08 00:23:49.299 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 8 00:23:49.300085 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:23:49.304394 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
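[Editor's note] coreos-metadata fetches http://169.254.169.254/hetzner/v1/metadata and .../private-networks, both of which succeed above. A sketch of the same two requests with urllib; the link-local endpoint is only reachable from inside a Hetzner VM, and the script name is illustrative:

    # fetch_metadata.py - query the same Hetzner metadata endpoints coreos-metadata uses.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/hetzner/v1/metadata"

    def fetch(path: str = "") -> str:
        with urlopen(BASE + path, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        print(fetch())                      # instance metadata
        print(fetch("/private-networks"))   # private network attachments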
Nov 8 00:23:49.309352 coreos-metadata[1468]: Nov 08 00:23:49.307 INFO Fetch successful Nov 8 00:23:49.312595 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:23:49.312717 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:23:49.312928 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:23:49.313032 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:23:49.316154 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:23:49.316342 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:23:49.323737 extend-filesystems[1473]: Found loop4 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found loop5 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found loop6 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found loop7 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda1 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda2 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda3 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found usr Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda4 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda6 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda7 Nov 8 00:23:49.323737 extend-filesystems[1473]: Found sda9 Nov 8 00:23:49.323737 extend-filesystems[1473]: Checking size of /dev/sda9 Nov 8 00:23:49.441962 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 8 00:23:49.442003 update_engine[1480]: I20251108 00:23:49.328962 1480 main.cc:92] Flatcar Update Engine starting Nov 8 00:23:49.442003 update_engine[1480]: I20251108 00:23:49.334715 1480 update_check_scheduler.cc:74] Next update check in 3m56s Nov 8 00:23:49.343944 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:23:49.447163 jq[1489]: true Nov 8 00:23:49.447274 extend-filesystems[1473]: Resized partition /dev/sda9 Nov 8 00:23:49.343974 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:23:49.450229 extend-filesystems[1513]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:23:49.371211 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:23:49.462156 tar[1493]: linux-amd64/LICENSE Nov 8 00:23:49.462156 tar[1493]: linux-amd64/helm Nov 8 00:23:49.397924 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:23:49.466394 jq[1502]: true Nov 8 00:23:49.397956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:23:49.406584 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:23:49.417667 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:23:49.443957 systemd-logind[1479]: New seat seat0. 
Nov 8 00:23:49.455794 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (Power Button) Nov 8 00:23:49.455808 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:23:49.455964 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:23:49.509853 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:23:49.513991 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:23:49.541083 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1405) Nov 8 00:23:49.589327 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:23:49.593571 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:23:49.609306 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:23:49.611299 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 8 00:23:49.614784 systemd[1]: Starting sshkeys.service... Nov 8 00:23:49.634906 extend-filesystems[1513]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:23:49.634906 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 8 00:23:49.634906 extend-filesystems[1513]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 8 00:23:49.636185 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:23:49.641496 extend-filesystems[1473]: Resized filesystem in /dev/sda9 Nov 8 00:23:49.641496 extend-filesystems[1473]: Found sr0 Nov 8 00:23:49.646638 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:23:49.648970 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:23:49.649107 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:23:49.661515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:23:49.674029 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:23:49.684692 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:23:49.684871 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:23:49.685345 containerd[1501]: time="2025-11-08T00:23:49.685250771Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:23:49.694677 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:23:49.700759 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:23:49.710311 coreos-metadata[1558]: Nov 08 00:23:49.709 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 8 00:23:49.710792 coreos-metadata[1558]: Nov 08 00:23:49.710 INFO Fetch successful Nov 8 00:23:49.712999 unknown[1558]: wrote ssh authorized keys file for user: core Nov 8 00:23:49.716667 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:23:49.723269 containerd[1501]: time="2025-11-08T00:23:49.723236924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:49.723597 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:23:49.726911 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
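[Editor's note] The on-line resize above grows ext4 on /dev/sda9 from 1617920 to 9393147 blocks of 4 KiB, i.e. from roughly 6.2 GiB to roughly 35.8 GiB. The arithmetic, using only the numbers printed in the log:

    # resize_math.py - sanity-check the resize figures reported for /dev/sda9.
    BLOCK = 4096
    OLD_BLOCKS, NEW_BLOCKS = 1_617_920, 9_393_147

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(OLD_BLOCKS):.1f} GiB, after: {gib(NEW_BLOCKS):.1f} GiB")
    # -> before: 6.2 GiB, after: 35.8 GiB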
Nov 8 00:23:49.730522 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:23:49.735149 containerd[1501]: time="2025-11-08T00:23:49.735111615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:49.735250 containerd[1501]: time="2025-11-08T00:23:49.735235347Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:23:49.735321 containerd[1501]: time="2025-11-08T00:23:49.735309917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:23:49.735524 containerd[1501]: time="2025-11-08T00:23:49.735509141Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:23:49.735583 containerd[1501]: time="2025-11-08T00:23:49.735573101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:49.735686 containerd[1501]: time="2025-11-08T00:23:49.735669572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:49.735744 containerd[1501]: time="2025-11-08T00:23:49.735733281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:49.735948 containerd[1501]: time="2025-11-08T00:23:49.735930932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:49.735999 containerd[1501]: time="2025-11-08T00:23:49.735988630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:49.736042 containerd[1501]: time="2025-11-08T00:23:49.736032282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:49.736076 containerd[1501]: time="2025-11-08T00:23:49.736067999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:49.736195 containerd[1501]: time="2025-11-08T00:23:49.736181452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:49.736492 containerd[1501]: time="2025-11-08T00:23:49.736476656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:49.736652 containerd[1501]: time="2025-11-08T00:23:49.736638078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:49.736698 containerd[1501]: time="2025-11-08T00:23:49.736689094Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Nov 8 00:23:49.736829 containerd[1501]: time="2025-11-08T00:23:49.736816443Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:23:49.736913 containerd[1501]: time="2025-11-08T00:23:49.736901042Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:23:49.740707 containerd[1501]: time="2025-11-08T00:23:49.740690098Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:23:49.740932 containerd[1501]: time="2025-11-08T00:23:49.740921041Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:23:49.741006 containerd[1501]: time="2025-11-08T00:23:49.740995200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:23:49.741054 containerd[1501]: time="2025-11-08T00:23:49.741045644Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:23:49.741109 containerd[1501]: time="2025-11-08T00:23:49.741098194Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:23:49.741319 containerd[1501]: time="2025-11-08T00:23:49.741231633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742106134Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742204217Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742217743Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742228293Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742244834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742256145Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742265633Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:23:49.742314 containerd[1501]: time="2025-11-08T00:23:49.742275992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742464566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742482820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742492458Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742502657Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742526983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742538775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742551088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742561508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742571867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742585402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742594910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742604228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742613405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742742 containerd[1501]: time="2025-11-08T00:23:49.742627391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742938 containerd[1501]: time="2025-11-08T00:23:49.742637260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742938 containerd[1501]: time="2025-11-08T00:23:49.742647960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742938 containerd[1501]: time="2025-11-08T00:23:49.742657868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742938 containerd[1501]: time="2025-11-08T00:23:49.742669310Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:23:49.742938 containerd[1501]: time="2025-11-08T00:23:49.742686412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742938 containerd[1501]: time="2025-11-08T00:23:49.742695359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.742938 containerd[1501]: time="2025-11-08T00:23:49.742705878Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743115276Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743138560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743147687Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743156554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743163878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743173595Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743245921Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:23:49.743304 containerd[1501]: time="2025-11-08T00:23:49.743258144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:23:49.744046 containerd[1501]: time="2025-11-08T00:23:49.743640902Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:23:49.744046 containerd[1501]: time="2025-11-08T00:23:49.743694122Z" level=info msg="Connect containerd service" Nov 8 00:23:49.744046 containerd[1501]: time="2025-11-08T00:23:49.743722575Z" level=info msg="using legacy CRI server" Nov 8 00:23:49.744046 containerd[1501]: time="2025-11-08T00:23:49.743728086Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:23:49.744046 containerd[1501]: time="2025-11-08T00:23:49.743828774Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:23:49.744744 containerd[1501]: time="2025-11-08T00:23:49.744726989Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:23:49.745018 containerd[1501]: time="2025-11-08T00:23:49.745004800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:23:49.745150 containerd[1501]: time="2025-11-08T00:23:49.745137579Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:23:49.745924 containerd[1501]: time="2025-11-08T00:23:49.745076214Z" level=info msg="Start subscribing containerd event" Nov 8 00:23:49.746091 containerd[1501]: time="2025-11-08T00:23:49.745982584Z" level=info msg="Start recovering state" Nov 8 00:23:49.746091 containerd[1501]: time="2025-11-08T00:23:49.746036315Z" level=info msg="Start event monitor" Nov 8 00:23:49.746091 containerd[1501]: time="2025-11-08T00:23:49.746050161Z" level=info msg="Start snapshots syncer" Nov 8 00:23:49.746091 containerd[1501]: time="2025-11-08T00:23:49.746057154Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:23:49.746091 containerd[1501]: time="2025-11-08T00:23:49.746063005Z" level=info msg="Start streaming server" Nov 8 00:23:49.746346 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:23:49.749304 containerd[1501]: time="2025-11-08T00:23:49.748819625Z" level=info msg="containerd successfully booted in 0.064628s" Nov 8 00:23:49.753899 update-ssh-keys[1577]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:23:49.754388 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:23:49.759435 systemd[1]: Finished sshkeys.service. Nov 8 00:23:49.953957 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:23:49.965555 systemd[1]: Started sshd@0-65.109.8.72:22-147.75.109.163:55534.service - OpenSSH per-connection server daemon (147.75.109.163:55534). Nov 8 00:23:50.046335 tar[1493]: linux-amd64/README.md Nov 8 00:23:50.055168 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:23:50.272501 systemd-networkd[1414]: eth1: Gained IPv6LL Nov 8 00:23:50.273084 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Nov 8 00:23:50.275456 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
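[Editor's note] The CRI plugin's "failed to load cni during init ... no network config found in /etc/cni/net.d" error above is expected at this stage: pod networking stays uninitialized until a network add-on drops a config file into that directory. A sketch of the same check, using the directory from the log and the conventional CNI file extensions:

    # check_cni.py - report whether /etc/cni/net.d contains a usable network config,
    # which is what the "cni plugin not initialized" message is about.
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")

    def cni_configs():
        if not CNI_CONF_DIR.is_dir():
            return []
        return sorted(p for p in CNI_CONF_DIR.iterdir()
                      if p.suffix in (".conf", ".conflist", ".json"))

    if __name__ == "__main__":
        confs = cni_configs()
        print("CNI configs:", [p.name for p in confs] or "none (pod networking not ready)")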
Nov 8 00:23:50.277135 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:23:50.286505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:23:50.289778 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:23:50.310352 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:23:50.528684 systemd-networkd[1414]: eth0: Gained IPv6LL Nov 8 00:23:50.530902 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Nov 8 00:23:51.085134 sshd[1582]: Accepted publickey for core from 147.75.109.163 port 55534 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:23:51.086805 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:51.098602 systemd-logind[1479]: New session 1 of user core. Nov 8 00:23:51.100493 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:23:51.111250 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:23:51.124053 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:23:51.135458 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:23:51.141484 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:23:51.216620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:51.219587 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:23:51.225664 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:23:51.276716 systemd[1600]: Queued start job for default target default.target. Nov 8 00:23:51.282865 systemd[1600]: Created slice app.slice - User Application Slice. Nov 8 00:23:51.283018 systemd[1600]: Reached target paths.target - Paths. Nov 8 00:23:51.283037 systemd[1600]: Reached target timers.target - Timers. Nov 8 00:23:51.284572 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:23:51.297303 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:23:51.297602 systemd[1600]: Reached target sockets.target - Sockets. Nov 8 00:23:51.297781 systemd[1600]: Reached target basic.target - Basic System. Nov 8 00:23:51.297845 systemd[1600]: Reached target default.target - Main User Target. Nov 8 00:23:51.297891 systemd[1600]: Startup finished in 149ms. Nov 8 00:23:51.297915 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:23:51.305465 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:23:51.307699 systemd[1]: Startup finished in 1.665s (kernel) + 6.100s (initrd) + 4.525s (userspace) = 12.291s. Nov 8 00:23:51.741070 kubelet[1611]: E1108 00:23:51.740998 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:23:51.743146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:23:51.743379 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
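[Editor's note] kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by kubeadm init or kubeadm join, so the failure (and the scheduled restart later in the log) is expected until the node is bootstrapped. A sketch that performs the same existence check kubelet is failing on:

    # kubelet_config_check.py - mirror the check behind the kubelet failure above.
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if CONFIG.is_file():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can load it")
    else:
        print(f"{CONFIG} missing; kubelet keeps exiting until the node is bootstrapped")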
Nov 8 00:23:52.089404 systemd[1]: Started sshd@1-65.109.8.72:22-147.75.109.163:51792.service - OpenSSH per-connection server daemon (147.75.109.163:51792). Nov 8 00:23:53.200174 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 51792 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:23:53.201678 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:53.205971 systemd-logind[1479]: New session 2 of user core. Nov 8 00:23:53.212513 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:23:53.969262 sshd[1627]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:53.972185 systemd[1]: sshd@1-65.109.8.72:22-147.75.109.163:51792.service: Deactivated successfully. Nov 8 00:23:53.974472 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:23:53.975903 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:23:53.977316 systemd-logind[1479]: Removed session 2. Nov 8 00:23:54.126785 systemd[1]: Started sshd@2-65.109.8.72:22-147.75.109.163:51800.service - OpenSSH per-connection server daemon (147.75.109.163:51800). Nov 8 00:23:55.144236 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 51800 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:23:55.145746 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:55.150367 systemd-logind[1479]: New session 3 of user core. Nov 8 00:23:55.157585 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:23:55.843150 sshd[1634]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:55.846853 systemd[1]: sshd@2-65.109.8.72:22-147.75.109.163:51800.service: Deactivated successfully. Nov 8 00:23:55.848871 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:23:55.850493 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:23:55.851851 systemd-logind[1479]: Removed session 3. Nov 8 00:23:56.014217 systemd[1]: Started sshd@3-65.109.8.72:22-147.75.109.163:51812.service - OpenSSH per-connection server daemon (147.75.109.163:51812). Nov 8 00:23:57.010982 sshd[1641]: Accepted publickey for core from 147.75.109.163 port 51812 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:23:57.012843 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:57.018998 systemd-logind[1479]: New session 4 of user core. Nov 8 00:23:57.024469 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:23:57.704440 sshd[1641]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:57.708195 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:23:57.708675 systemd[1]: sshd@3-65.109.8.72:22-147.75.109.163:51812.service: Deactivated successfully. Nov 8 00:23:57.710793 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:23:57.711554 systemd-logind[1479]: Removed session 4. Nov 8 00:23:57.877619 systemd[1]: Started sshd@4-65.109.8.72:22-147.75.109.163:51820.service - OpenSSH per-connection server daemon (147.75.109.163:51820). Nov 8 00:23:58.885324 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 51820 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:23:58.886778 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:58.891633 systemd-logind[1479]: New session 5 of user core. 
Nov 8 00:23:58.897448 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:23:59.431040 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:23:59.431417 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:23:59.448337 sudo[1651]: pam_unix(sudo:session): session closed for user root Nov 8 00:23:59.613184 sshd[1648]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:59.617216 systemd[1]: sshd@4-65.109.8.72:22-147.75.109.163:51820.service: Deactivated successfully. Nov 8 00:23:59.618978 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:23:59.619686 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:23:59.621056 systemd-logind[1479]: Removed session 5. Nov 8 00:23:59.788573 systemd[1]: Started sshd@5-65.109.8.72:22-147.75.109.163:32944.service - OpenSSH per-connection server daemon (147.75.109.163:32944). Nov 8 00:24:00.797438 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 32944 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:24:00.798990 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:00.803963 systemd-logind[1479]: New session 6 of user core. Nov 8 00:24:00.809482 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:24:01.328593 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:24:01.328936 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:01.332744 sudo[1660]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:01.338785 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:24:01.339130 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:01.355724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:24:01.357444 auditctl[1663]: No rules Nov 8 00:24:01.357827 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:24:01.358043 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:24:01.360879 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:24:01.384155 augenrules[1681]: No rules Nov 8 00:24:01.384839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:24:01.386205 sudo[1659]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:01.548881 sshd[1656]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:01.551796 systemd[1]: sshd@5-65.109.8.72:22-147.75.109.163:32944.service: Deactivated successfully. Nov 8 00:24:01.553243 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:24:01.554320 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:24:01.555530 systemd-logind[1479]: Removed session 6. Nov 8 00:24:01.726597 systemd[1]: Started sshd@6-65.109.8.72:22-147.75.109.163:32948.service - OpenSSH per-connection server daemon (147.75.109.163:32948). Nov 8 00:24:01.993940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:24:02.003856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:02.122579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:24:02.128326 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:02.179966 kubelet[1699]: E1108 00:24:02.179849 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:02.183669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:02.183861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:24:02.729986 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 32948 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:24:02.731479 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:02.736730 systemd-logind[1479]: New session 7 of user core. Nov 8 00:24:02.741481 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:24:03.263867 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:24:03.264147 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:03.530476 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:24:03.531794 (dockerd)[1725]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:24:03.804692 dockerd[1725]: time="2025-11-08T00:24:03.804040865Z" level=info msg="Starting up" Nov 8 00:24:03.906158 dockerd[1725]: time="2025-11-08T00:24:03.905843367Z" level=info msg="Loading containers: start." Nov 8 00:24:04.002344 kernel: Initializing XFRM netlink socket Nov 8 00:24:04.026259 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Nov 8 00:24:05.150385 systemd-timesyncd[1378]: Contacted time server 173.249.58.145:123 (2.flatcar.pool.ntp.org). Nov 8 00:24:05.150490 systemd-timesyncd[1378]: Initial clock synchronization to Sat 2025-11-08 00:24:05.149941 UTC. Nov 8 00:24:05.150604 systemd-resolved[1361]: Clock change detected. Flushing caches. Nov 8 00:24:05.165974 systemd-networkd[1414]: docker0: Link UP Nov 8 00:24:05.184132 dockerd[1725]: time="2025-11-08T00:24:05.184063138Z" level=info msg="Loading containers: done." Nov 8 00:24:05.200270 dockerd[1725]: time="2025-11-08T00:24:05.200199803Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:24:05.200403 dockerd[1725]: time="2025-11-08T00:24:05.200345265Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:24:05.200483 dockerd[1725]: time="2025-11-08T00:24:05.200446094Z" level=info msg="Daemon has completed initialization" Nov 8 00:24:05.233182 dockerd[1725]: time="2025-11-08T00:24:05.233044684Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:24:05.233177 systemd[1]: Started docker.service - Docker Application Container Engine. 
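[Editor's note] dockerd reports "API listen on /run/docker.sock" above. A sketch that issues a plain HTTP GET /version over that Unix socket using only the standard library (no Docker SDK assumed); it needs permission to open the socket, e.g. root or docker group membership:

    # docker_version.py - talk to the freshly started daemon over /run/docker.sock.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that tunnels over an AF_UNIX socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    if __name__ == "__main__":
        conn = UnixHTTPConnection("/run/docker.sock")
        conn.request("GET", "/version")
        resp = conn.getresponse()
        print(resp.status, resp.read()[:200])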
Nov 8 00:24:06.457670 containerd[1501]: time="2025-11-08T00:24:06.457619486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:24:06.995586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823582406.mount: Deactivated successfully. Nov 8 00:24:07.931307 containerd[1501]: time="2025-11-08T00:24:07.931239215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:07.932379 containerd[1501]: time="2025-11-08T00:24:07.932332545Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28838016" Nov 8 00:24:07.933262 containerd[1501]: time="2025-11-08T00:24:07.933152503Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:07.936398 containerd[1501]: time="2025-11-08T00:24:07.936136660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:07.937076 containerd[1501]: time="2025-11-08T00:24:07.937042008Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.479379011s" Nov 8 00:24:07.937153 containerd[1501]: time="2025-11-08T00:24:07.937082173Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:24:07.937990 containerd[1501]: time="2025-11-08T00:24:07.937971020Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:24:09.078263 containerd[1501]: time="2025-11-08T00:24:09.077282330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:09.078263 containerd[1501]: time="2025-11-08T00:24:09.078193860Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787049" Nov 8 00:24:09.078866 containerd[1501]: time="2025-11-08T00:24:09.078814825Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:09.082255 containerd[1501]: time="2025-11-08T00:24:09.081235103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:09.082255 containerd[1501]: time="2025-11-08T00:24:09.082075630Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.144022145s" Nov 8 00:24:09.082255 
containerd[1501]: time="2025-11-08T00:24:09.082109534Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:24:09.082847 containerd[1501]: time="2025-11-08T00:24:09.082712765Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:24:10.152628 containerd[1501]: time="2025-11-08T00:24:10.152574572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:10.153661 containerd[1501]: time="2025-11-08T00:24:10.153599183Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176311" Nov 8 00:24:10.154315 containerd[1501]: time="2025-11-08T00:24:10.154277445Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:10.157716 containerd[1501]: time="2025-11-08T00:24:10.157327455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:10.159456 containerd[1501]: time="2025-11-08T00:24:10.159243850Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.076303489s" Nov 8 00:24:10.159456 containerd[1501]: time="2025-11-08T00:24:10.159283604Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:24:10.160274 containerd[1501]: time="2025-11-08T00:24:10.160249717Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:24:11.119241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875929821.mount: Deactivated successfully. 
Nov 8 00:24:11.382184 containerd[1501]: time="2025-11-08T00:24:11.382060826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:11.383067 containerd[1501]: time="2025-11-08T00:24:11.383026376Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924234" Nov 8 00:24:11.383948 containerd[1501]: time="2025-11-08T00:24:11.383905576Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:11.385480 containerd[1501]: time="2025-11-08T00:24:11.385442939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:11.386128 containerd[1501]: time="2025-11-08T00:24:11.385794909Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.22551249s" Nov 8 00:24:11.386128 containerd[1501]: time="2025-11-08T00:24:11.385824344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:24:11.386422 containerd[1501]: time="2025-11-08T00:24:11.386401336Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:24:11.863984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117007335.mount: Deactivated successfully. 
Nov 8 00:24:12.627341 containerd[1501]: time="2025-11-08T00:24:12.627280288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:12.628241 containerd[1501]: time="2025-11-08T00:24:12.628180576Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Nov 8 00:24:12.629085 containerd[1501]: time="2025-11-08T00:24:12.628841506Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:12.631801 containerd[1501]: time="2025-11-08T00:24:12.631569332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:12.632359 containerd[1501]: time="2025-11-08T00:24:12.632330178Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.245904357s" Nov 8 00:24:12.632405 containerd[1501]: time="2025-11-08T00:24:12.632363410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:24:12.633160 containerd[1501]: time="2025-11-08T00:24:12.633136231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:24:13.081478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819217383.mount: Deactivated successfully. 
Nov 8 00:24:13.088170 containerd[1501]: time="2025-11-08T00:24:13.088099138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:13.089067 containerd[1501]: time="2025-11-08T00:24:13.088988205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Nov 8 00:24:13.091232 containerd[1501]: time="2025-11-08T00:24:13.089870580Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:13.092285 containerd[1501]: time="2025-11-08T00:24:13.092253860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:13.093160 containerd[1501]: time="2025-11-08T00:24:13.093129061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.961842ms" Nov 8 00:24:13.093281 containerd[1501]: time="2025-11-08T00:24:13.093262511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:24:13.094071 containerd[1501]: time="2025-11-08T00:24:13.094039990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:24:13.519181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:24:13.524573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:13.649765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299953931.mount: Deactivated successfully. Nov 8 00:24:13.654752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:13.663727 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:13.737933 kubelet[2006]: E1108 00:24:13.737617 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:13.740158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:13.741396 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
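The kubelet exit recorded above is the expected first-boot failure: /var/lib/kubelet/config.yaml does not exist yet (it is typically written later by kubeadm during init/join), so the unit exits with status 1 as logged. A minimal Python sketch of the same pre-flight check, purely illustrative and not part of the kubelet itself:

    # Illustrative only: reproduce the condition behind the failure above.
    # The kubelet exits because /var/lib/kubelet/config.yaml is missing until
    # it is written later (typically by kubeadm).
    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    if config.is_file():
        print(f"{config} present, {config.stat().st_size} bytes")
    else:
        # same condition the kubelet reports: open ...: no such file or directory
        print(f"open {config}: no such file or directory")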
Nov 8 00:24:16.227132 containerd[1501]: time="2025-11-08T00:24:16.227031818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:16.228300 containerd[1501]: time="2025-11-08T00:24:16.228250213Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Nov 8 00:24:16.229422 containerd[1501]: time="2025-11-08T00:24:16.229373730Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:16.234240 containerd[1501]: time="2025-11-08T00:24:16.234176798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:16.236388 containerd[1501]: time="2025-11-08T00:24:16.235847081Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.141773518s" Nov 8 00:24:16.236388 containerd[1501]: time="2025-11-08T00:24:16.235899139Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:24:19.143077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:19.155616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:19.188988 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-7.scope)... Nov 8 00:24:19.189001 systemd[1]: Reloading... Nov 8 00:24:19.289507 zram_generator::config[2131]: No configuration found. Nov 8 00:24:19.393736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:24:19.456805 systemd[1]: Reloading finished in 267 ms. Nov 8 00:24:19.501441 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:24:19.501543 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:24:19.501828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:19.505688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:19.595342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:19.600027 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:24:19.646320 kubelet[2185]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:24:19.646320 kubelet[2185]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
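For scale, the "bytes read" counters and pull durations logged for each image above can be combined into rough per-image throughput figures. A small sketch using values copied from the log lines above (illustrative only, not part of containerd):

    # Rough per-image pull throughput, computed from the "bytes read" and
    # "Pulled image ... in <duration>" values logged above (illustrative only).
    pulls = {
        "kube-apiserver:v1.32.9":          (28_838_016, 1.479379011),
        "kube-controller-manager:v1.32.9": (24_787_049, 1.144022145),
        "kube-scheduler:v1.32.9":          (19_176_311, 1.076303489),
        "kube-proxy:v1.32.9":              (30_924_234, 1.22551249),
        "coredns:v1.11.3":                 (18_565_335, 1.245904357),
        "etcd:3.5.16-0":                   (57_682_132, 3.141773518),
    }

    for image, (bytes_read, seconds) in pulls.items():
        print(f"{image:34s} {bytes_read / seconds / 2**20:6.1f} MiB/s")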
Nov 8 00:24:19.646320 kubelet[2185]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:24:19.646320 kubelet[2185]: I1108 00:24:19.644613 2185 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:24:20.089736 kubelet[2185]: I1108 00:24:20.089632 2185 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:24:20.090204 kubelet[2185]: I1108 00:24:20.090184 2185 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:24:20.090974 kubelet[2185]: I1108 00:24:20.090958 2185 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:24:20.123346 kubelet[2185]: E1108 00:24:20.123292 2185 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.109.8.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:20.124104 kubelet[2185]: I1108 00:24:20.123896 2185 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:24:20.136871 kubelet[2185]: E1108 00:24:20.135731 2185 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:24:20.136871 kubelet[2185]: I1108 00:24:20.135786 2185 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:24:20.142041 kubelet[2185]: I1108 00:24:20.142002 2185 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:24:20.144327 kubelet[2185]: I1108 00:24:20.144247 2185 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:24:20.144574 kubelet[2185]: I1108 00:24:20.144301 2185 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-d839b30383","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:24:20.147462 kubelet[2185]: I1108 00:24:20.147421 2185 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:24:20.147462 kubelet[2185]: I1108 00:24:20.147450 2185 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:24:20.148970 kubelet[2185]: I1108 00:24:20.148934 2185 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:24:20.154039 kubelet[2185]: I1108 00:24:20.153634 2185 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:24:20.154039 kubelet[2185]: I1108 00:24:20.153695 2185 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:24:20.154039 kubelet[2185]: I1108 00:24:20.153731 2185 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:24:20.154039 kubelet[2185]: I1108 00:24:20.153748 2185 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:24:20.159956 kubelet[2185]: I1108 00:24:20.159850 2185 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:24:20.163428 kubelet[2185]: I1108 00:24:20.163403 2185 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:24:20.163570 kubelet[2185]: W1108 00:24:20.163556 2185 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
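The nodeConfig blob in the container-manager line above carries the kubelet's hard-eviction thresholds. A short sketch that extracts just that field; the JSON literal below copies only the signal/operator/value parts from the logged structure (GracePeriod/MinReclaim omitted), and the script is illustrative only:

    # Extract the hard-eviction thresholds embedded in the nodeConfig logged above.
    import json

    node_config = json.loads("""
    {"HardEvictionThresholds": [
      {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "memory.available",   "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
      {"Signal": "nodefs.available",   "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}},
      {"Signal": "nodefs.inodesFree",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "imagefs.available",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}}
    ]}
    """)

    for t in node_config["HardEvictionThresholds"]:
        value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
        print(f'{t["Signal"]:20s} {t["Operator"]} {value}')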
Nov 8 00:24:20.164820 kubelet[2185]: I1108 00:24:20.164194 2185 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:24:20.164820 kubelet[2185]: I1108 00:24:20.164252 2185 server.go:1287] "Started kubelet" Nov 8 00:24:20.164820 kubelet[2185]: W1108 00:24:20.164401 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.109.8.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-d839b30383&limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:20.164820 kubelet[2185]: E1108 00:24:20.164456 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.109.8.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-d839b30383&limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:20.175231 kubelet[2185]: I1108 00:24:20.175186 2185 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:24:20.180655 kubelet[2185]: I1108 00:24:20.180360 2185 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:24:20.182711 kubelet[2185]: I1108 00:24:20.181936 2185 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:24:20.186715 kubelet[2185]: W1108 00:24:20.186638 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.109.8.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:20.186948 kubelet[2185]: E1108 00:24:20.186913 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.109.8.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:20.187780 kubelet[2185]: I1108 00:24:20.187742 2185 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:24:20.190151 kubelet[2185]: I1108 00:24:20.190116 2185 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:24:20.190474 kubelet[2185]: E1108 00:24:20.190430 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:20.191078 kubelet[2185]: I1108 00:24:20.191047 2185 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:24:20.191124 kubelet[2185]: I1108 00:24:20.191109 2185 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:24:20.193484 kubelet[2185]: I1108 00:24:20.193373 2185 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:24:20.193854 kubelet[2185]: I1108 00:24:20.193811 2185 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:24:20.200671 kubelet[2185]: E1108 00:24:20.198772 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.8.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-d839b30383?timeout=10s\": dial tcp 65.109.8.72:6443: connect: connection refused" interval="200ms" Nov 8 00:24:20.200671 
kubelet[2185]: I1108 00:24:20.199141 2185 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:24:20.200671 kubelet[2185]: I1108 00:24:20.199306 2185 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:24:20.202477 kubelet[2185]: W1108 00:24:20.202402 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.109.8.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:20.202619 kubelet[2185]: E1108 00:24:20.202482 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.109.8.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:20.207837 kubelet[2185]: E1108 00:24:20.205194 2185 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.109.8.72:6443/api/v1/namespaces/default/events\": dial tcp 65.109.8.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-d839b30383.1875e04f3fe1a21a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-d839b30383,UID:ci-4081-3-6-n-d839b30383,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-d839b30383,},FirstTimestamp:2025-11-08 00:24:20.16420713 +0000 UTC m=+0.560514111,LastTimestamp:2025-11-08 00:24:20.16420713 +0000 UTC m=+0.560514111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-d839b30383,}" Nov 8 00:24:20.209678 kubelet[2185]: I1108 00:24:20.209641 2185 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:24:20.210461 kubelet[2185]: I1108 00:24:20.210345 2185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:24:20.212544 kubelet[2185]: I1108 00:24:20.212269 2185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:24:20.212544 kubelet[2185]: I1108 00:24:20.212290 2185 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:24:20.212544 kubelet[2185]: I1108 00:24:20.212307 2185 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:24:20.212544 kubelet[2185]: I1108 00:24:20.212320 2185 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:24:20.212544 kubelet[2185]: E1108 00:24:20.212357 2185 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:24:20.223773 kubelet[2185]: W1108 00:24:20.223695 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.109.8.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:20.223773 kubelet[2185]: E1108 00:24:20.223768 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.109.8.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:20.231842 kubelet[2185]: E1108 00:24:20.231633 2185 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:24:20.237186 kubelet[2185]: I1108 00:24:20.237120 2185 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:24:20.237186 kubelet[2185]: I1108 00:24:20.237170 2185 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:24:20.237186 kubelet[2185]: I1108 00:24:20.237190 2185 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:24:20.238862 kubelet[2185]: I1108 00:24:20.238839 2185 policy_none.go:49] "None policy: Start" Nov 8 00:24:20.238862 kubelet[2185]: I1108 00:24:20.238860 2185 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:24:20.238937 kubelet[2185]: I1108 00:24:20.238870 2185 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:24:20.247350 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:24:20.256173 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:24:20.258863 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:24:20.270737 kubelet[2185]: I1108 00:24:20.270361 2185 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:24:20.273504 kubelet[2185]: I1108 00:24:20.273478 2185 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:24:20.273618 kubelet[2185]: I1108 00:24:20.273501 2185 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:24:20.273918 kubelet[2185]: I1108 00:24:20.273889 2185 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:24:20.275742 kubelet[2185]: E1108 00:24:20.275711 2185 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:24:20.275792 kubelet[2185]: E1108 00:24:20.275755 2185 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:20.326744 systemd[1]: Created slice kubepods-burstable-podd7d2f13a53d4d57e87a177897fe88bf8.slice - libcontainer container kubepods-burstable-podd7d2f13a53d4d57e87a177897fe88bf8.slice. Nov 8 00:24:20.334194 kubelet[2185]: E1108 00:24:20.334001 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.336942 systemd[1]: Created slice kubepods-burstable-podabae073533c9b91767e7534278294b57.slice - libcontainer container kubepods-burstable-podabae073533c9b91767e7534278294b57.slice. Nov 8 00:24:20.340844 kubelet[2185]: E1108 00:24:20.340770 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.342673 systemd[1]: Created slice kubepods-burstable-podef8e838160a97a4887f3de6a4ba44033.slice - libcontainer container kubepods-burstable-podef8e838160a97a4887f3de6a4ba44033.slice. Nov 8 00:24:20.344475 kubelet[2185]: E1108 00:24:20.344271 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.375597 kubelet[2185]: I1108 00:24:20.375556 2185 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.376022 kubelet[2185]: E1108 00:24:20.375981 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.109.8.72:6443/api/v1/nodes\": dial tcp 65.109.8.72:6443: connect: connection refused" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392495 kubelet[2185]: I1108 00:24:20.392459 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392495 kubelet[2185]: I1108 00:24:20.392503 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392826 kubelet[2185]: I1108 00:24:20.392780 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef8e838160a97a4887f3de6a4ba44033-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-d839b30383\" (UID: \"ef8e838160a97a4887f3de6a4ba44033\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392826 kubelet[2185]: I1108 00:24:20.392807 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d7d2f13a53d4d57e87a177897fe88bf8-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d839b30383\" (UID: \"d7d2f13a53d4d57e87a177897fe88bf8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392902 kubelet[2185]: I1108 00:24:20.392829 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7d2f13a53d4d57e87a177897fe88bf8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d839b30383\" (UID: \"d7d2f13a53d4d57e87a177897fe88bf8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392902 kubelet[2185]: I1108 00:24:20.392845 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7d2f13a53d4d57e87a177897fe88bf8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-d839b30383\" (UID: \"d7d2f13a53d4d57e87a177897fe88bf8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392902 kubelet[2185]: I1108 00:24:20.392863 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392902 kubelet[2185]: I1108 00:24:20.392880 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.392902 kubelet[2185]: I1108 00:24:20.392895 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.400009 kubelet[2185]: E1108 00:24:20.399956 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.8.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-d839b30383?timeout=10s\": dial tcp 65.109.8.72:6443: connect: connection refused" interval="400ms" Nov 8 00:24:20.583243 kubelet[2185]: I1108 00:24:20.582864 2185 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.583243 kubelet[2185]: E1108 00:24:20.583190 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.109.8.72:6443/api/v1/nodes\": dial tcp 65.109.8.72:6443: connect: connection refused" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.636258 containerd[1501]: time="2025-11-08T00:24:20.635982279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-d839b30383,Uid:d7d2f13a53d4d57e87a177897fe88bf8,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:20.649245 containerd[1501]: time="2025-11-08T00:24:20.647002898Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-d839b30383,Uid:abae073533c9b91767e7534278294b57,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:20.649245 containerd[1501]: time="2025-11-08T00:24:20.647026272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-d839b30383,Uid:ef8e838160a97a4887f3de6a4ba44033,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:20.701189 kubelet[2185]: E1108 00:24:20.701051 2185 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.109.8.72:6443/api/v1/namespaces/default/events\": dial tcp 65.109.8.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-d839b30383.1875e04f3fe1a21a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-d839b30383,UID:ci-4081-3-6-n-d839b30383,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-d839b30383,},FirstTimestamp:2025-11-08 00:24:20.16420713 +0000 UTC m=+0.560514111,LastTimestamp:2025-11-08 00:24:20.16420713 +0000 UTC m=+0.560514111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-d839b30383,}" Nov 8 00:24:20.801172 kubelet[2185]: E1108 00:24:20.801125 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.8.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-d839b30383?timeout=10s\": dial tcp 65.109.8.72:6443: connect: connection refused" interval="800ms" Nov 8 00:24:20.986160 kubelet[2185]: I1108 00:24:20.985958 2185 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:20.986517 kubelet[2185]: E1108 00:24:20.986406 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.109.8.72:6443/api/v1/nodes\": dial tcp 65.109.8.72:6443: connect: connection refused" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:21.072206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617670917.mount: Deactivated successfully. 
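The "connection refused" errors against 65.109.8.72:6443 above are expected at this point: the kube-apiserver static pod has not started yet, so the kubelet keeps retrying (note the lease retry interval doubling from 200ms to 400ms to 800ms). A hedged, illustrative probe of that same endpoint, not the kubelet's actual retry code:

    # Illustrative reachability probe for the endpoint the kubelet is retrying above.
    # Nothing listens on 65.109.8.72:6443 until the kube-apiserver static pod is up.
    import socket
    import time

    def api_server_reachable(host="65.109.8.72", port=6443, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for attempt in range(5):
        if api_server_reachable():
            print("kube-apiserver is accepting connections")
            break
        time.sleep(0.2 * 2 ** attempt)  # mirrors the doubling retry interval in the log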
Nov 8 00:24:21.077799 containerd[1501]: time="2025-11-08T00:24:21.077738207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:24:21.078822 containerd[1501]: time="2025-11-08T00:24:21.078780942Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:24:21.079714 containerd[1501]: time="2025-11-08T00:24:21.079656976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:24:21.080441 containerd[1501]: time="2025-11-08T00:24:21.080391773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Nov 8 00:24:21.081638 containerd[1501]: time="2025-11-08T00:24:21.081566998Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:24:21.082958 containerd[1501]: time="2025-11-08T00:24:21.082843973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:24:21.082958 containerd[1501]: time="2025-11-08T00:24:21.082886593Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:24:21.085258 containerd[1501]: time="2025-11-08T00:24:21.085066371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:24:21.088412 containerd[1501]: time="2025-11-08T00:24:21.088275720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 439.460111ms" Nov 8 00:24:21.090195 containerd[1501]: time="2025-11-08T00:24:21.090154374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.030249ms" Nov 8 00:24:21.092435 containerd[1501]: time="2025-11-08T00:24:21.092346345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.218142ms" Nov 8 00:24:21.210756 containerd[1501]: time="2025-11-08T00:24:21.210531652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:21.210756 containerd[1501]: time="2025-11-08T00:24:21.210611201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:21.210756 containerd[1501]: time="2025-11-08T00:24:21.210625668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:21.210756 containerd[1501]: time="2025-11-08T00:24:21.210691402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:21.215719 containerd[1501]: time="2025-11-08T00:24:21.215481075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:21.215719 containerd[1501]: time="2025-11-08T00:24:21.215540907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:21.215719 containerd[1501]: time="2025-11-08T00:24:21.215568338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:21.215828 containerd[1501]: time="2025-11-08T00:24:21.215675109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:21.219380 containerd[1501]: time="2025-11-08T00:24:21.218587321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:21.219380 containerd[1501]: time="2025-11-08T00:24:21.218659255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:21.219380 containerd[1501]: time="2025-11-08T00:24:21.218675406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:21.219380 containerd[1501]: time="2025-11-08T00:24:21.218740127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:21.220483 kubelet[2185]: W1108 00:24:21.220319 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.109.8.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:21.220483 kubelet[2185]: E1108 00:24:21.220364 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.109.8.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:21.235396 systemd[1]: Started cri-containerd-14726f56e711bce1a6e6944e47284e4ff15549a4768b3d62e52348e1617ec2d2.scope - libcontainer container 14726f56e711bce1a6e6944e47284e4ff15549a4768b3d62e52348e1617ec2d2. Nov 8 00:24:21.254567 systemd[1]: Started cri-containerd-6073ae0dcc9bbe05599be68f945021d9c3c742a1a22e6b9266f1bd8c1a555186.scope - libcontainer container 6073ae0dcc9bbe05599be68f945021d9c3c742a1a22e6b9266f1bd8c1a555186. 
Nov 8 00:24:21.259586 systemd[1]: Started cri-containerd-c18314c3fcce8faa36fb7ba7e499351a9d89797c336ac11207067fe06f7168ec.scope - libcontainer container c18314c3fcce8faa36fb7ba7e499351a9d89797c336ac11207067fe06f7168ec. Nov 8 00:24:21.262373 kubelet[2185]: W1108 00:24:21.260187 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.109.8.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-d839b30383&limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:21.262373 kubelet[2185]: E1108 00:24:21.260310 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.109.8.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-d839b30383&limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:21.314338 containerd[1501]: time="2025-11-08T00:24:21.314200374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-d839b30383,Uid:abae073533c9b91767e7534278294b57,Namespace:kube-system,Attempt:0,} returns sandbox id \"14726f56e711bce1a6e6944e47284e4ff15549a4768b3d62e52348e1617ec2d2\"" Nov 8 00:24:21.323704 containerd[1501]: time="2025-11-08T00:24:21.323369871Z" level=info msg="CreateContainer within sandbox \"14726f56e711bce1a6e6944e47284e4ff15549a4768b3d62e52348e1617ec2d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:24:21.324178 containerd[1501]: time="2025-11-08T00:24:21.324152409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-d839b30383,Uid:d7d2f13a53d4d57e87a177897fe88bf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c18314c3fcce8faa36fb7ba7e499351a9d89797c336ac11207067fe06f7168ec\"" Nov 8 00:24:21.328747 containerd[1501]: time="2025-11-08T00:24:21.328716700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-d839b30383,Uid:ef8e838160a97a4887f3de6a4ba44033,Namespace:kube-system,Attempt:0,} returns sandbox id \"6073ae0dcc9bbe05599be68f945021d9c3c742a1a22e6b9266f1bd8c1a555186\"" Nov 8 00:24:21.331685 containerd[1501]: time="2025-11-08T00:24:21.331637929Z" level=info msg="CreateContainer within sandbox \"6073ae0dcc9bbe05599be68f945021d9c3c742a1a22e6b9266f1bd8c1a555186\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:24:21.331867 containerd[1501]: time="2025-11-08T00:24:21.331840699Z" level=info msg="CreateContainer within sandbox \"c18314c3fcce8faa36fb7ba7e499351a9d89797c336ac11207067fe06f7168ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:24:21.342193 containerd[1501]: time="2025-11-08T00:24:21.342139775Z" level=info msg="CreateContainer within sandbox \"14726f56e711bce1a6e6944e47284e4ff15549a4768b3d62e52348e1617ec2d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272\"" Nov 8 00:24:21.343887 containerd[1501]: time="2025-11-08T00:24:21.343859320Z" level=info msg="StartContainer for \"4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272\"" Nov 8 00:24:21.344272 containerd[1501]: time="2025-11-08T00:24:21.344232289Z" level=info msg="CreateContainer within sandbox \"6073ae0dcc9bbe05599be68f945021d9c3c742a1a22e6b9266f1bd8c1a555186\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} 
returns container id \"46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9\"" Nov 8 00:24:21.347451 containerd[1501]: time="2025-11-08T00:24:21.347420769Z" level=info msg="StartContainer for \"46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9\"" Nov 8 00:24:21.352954 containerd[1501]: time="2025-11-08T00:24:21.352909423Z" level=info msg="CreateContainer within sandbox \"c18314c3fcce8faa36fb7ba7e499351a9d89797c336ac11207067fe06f7168ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebf460fca7ce123edbfb23d4c647a5802bf5139f96846f3c9569d76b5836fcd2\"" Nov 8 00:24:21.354453 containerd[1501]: time="2025-11-08T00:24:21.353502586Z" level=info msg="StartContainer for \"ebf460fca7ce123edbfb23d4c647a5802bf5139f96846f3c9569d76b5836fcd2\"" Nov 8 00:24:21.384303 systemd[1]: Started cri-containerd-4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272.scope - libcontainer container 4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272. Nov 8 00:24:21.393462 systemd[1]: Started cri-containerd-46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9.scope - libcontainer container 46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9. Nov 8 00:24:21.402380 systemd[1]: Started cri-containerd-ebf460fca7ce123edbfb23d4c647a5802bf5139f96846f3c9569d76b5836fcd2.scope - libcontainer container ebf460fca7ce123edbfb23d4c647a5802bf5139f96846f3c9569d76b5836fcd2. Nov 8 00:24:21.423955 kubelet[2185]: W1108 00:24:21.423887 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.109.8.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:21.424088 kubelet[2185]: E1108 00:24:21.423963 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.109.8.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:21.456096 containerd[1501]: time="2025-11-08T00:24:21.456032291Z" level=info msg="StartContainer for \"4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272\" returns successfully" Nov 8 00:24:21.459183 containerd[1501]: time="2025-11-08T00:24:21.459135451Z" level=info msg="StartContainer for \"ebf460fca7ce123edbfb23d4c647a5802bf5139f96846f3c9569d76b5836fcd2\" returns successfully" Nov 8 00:24:21.476366 containerd[1501]: time="2025-11-08T00:24:21.476295105Z" level=info msg="StartContainer for \"46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9\" returns successfully" Nov 8 00:24:21.602575 kubelet[2185]: E1108 00:24:21.602444 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.8.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-d839b30383?timeout=10s\": dial tcp 65.109.8.72:6443: connect: connection refused" interval="1.6s" Nov 8 00:24:21.637322 kubelet[2185]: W1108 00:24:21.636633 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.109.8.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.109.8.72:6443: connect: connection refused Nov 8 00:24:21.637322 kubelet[2185]: E1108 00:24:21.636708 2185 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.109.8.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.109.8.72:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:24:21.789446 kubelet[2185]: I1108 00:24:21.789414 2185 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:22.238492 kubelet[2185]: E1108 00:24:22.238294 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:22.240529 kubelet[2185]: E1108 00:24:22.240315 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:22.242542 kubelet[2185]: E1108 00:24:22.242528 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:23.007332 kubelet[2185]: I1108 00:24:23.007280 2185 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:23.008283 kubelet[2185]: E1108 00:24:23.007846 2185 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-d839b30383\": node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.023203 kubelet[2185]: E1108 00:24:23.023167 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.123908 kubelet[2185]: E1108 00:24:23.123817 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.224190 kubelet[2185]: E1108 00:24:23.224146 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.245151 kubelet[2185]: E1108 00:24:23.245099 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:23.245531 kubelet[2185]: E1108 00:24:23.245503 2185 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d839b30383\" not found" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:23.325140 kubelet[2185]: E1108 00:24:23.324982 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.425944 kubelet[2185]: E1108 00:24:23.425878 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.526851 kubelet[2185]: E1108 00:24:23.526792 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.628070 kubelet[2185]: E1108 00:24:23.627927 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.728193 kubelet[2185]: E1108 00:24:23.728120 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 
00:24:23.828996 kubelet[2185]: E1108 00:24:23.828937 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.929515 kubelet[2185]: E1108 00:24:23.929377 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d839b30383\" not found" Nov 8 00:24:23.992720 kubelet[2185]: I1108 00:24:23.991547 2185 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:24.009433 kubelet[2185]: I1108 00:24:24.009388 2185 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" Nov 8 00:24:24.014629 kubelet[2185]: I1108 00:24:24.014333 2185 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:24.180373 kubelet[2185]: I1108 00:24:24.180072 2185 apiserver.go:52] "Watching apiserver" Nov 8 00:24:24.191661 kubelet[2185]: I1108 00:24:24.191583 2185 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:24:25.060917 systemd[1]: Reloading requested from client PID 2460 ('systemctl') (unit session-7.scope)... Nov 8 00:24:25.060944 systemd[1]: Reloading... Nov 8 00:24:25.168262 zram_generator::config[2500]: No configuration found. Nov 8 00:24:25.299680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:24:25.384190 systemd[1]: Reloading finished in 322 ms. Nov 8 00:24:25.417536 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:25.430991 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:24:25.431292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:25.438567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:25.555390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:25.564542 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:24:25.637724 kubelet[2551]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:24:25.638012 kubelet[2551]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:24:25.638057 kubelet[2551]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
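Putting the timestamps above together: the first kubelet instance (PID 2185) attempts node registration while the API server is still refusing connections, the last static control-plane container starts about a second later, and the node registers successfully roughly three seconds in, after which systemd restarts the kubelet as instance 2551. A small sketch of that timeline, using timestamps copied from the log (illustrative only):

    # Rough timeline reconstructed from timestamps in the log above.
    from datetime import datetime

    events = [
        ("first node-registration attempt", "00:24:20.375556"),
        ("last static pod container started", "00:24:21.476295"),
        ("node successfully registered",    "00:24:23.007280"),
    ]

    fmt = "%H:%M:%S.%f"
    t0 = datetime.strptime(events[0][1], fmt)
    for name, ts in events:
        offset = (datetime.strptime(ts, fmt) - t0).total_seconds()
        print(f"{name:34s} +{offset:5.2f}s")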
Nov 8 00:24:25.638175 kubelet[2551]: I1108 00:24:25.638146 2551 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:24:25.644824 kubelet[2551]: I1108 00:24:25.644803 2551 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:24:25.644910 kubelet[2551]: I1108 00:24:25.644902 2551 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:24:25.645129 kubelet[2551]: I1108 00:24:25.645120 2551 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:24:25.650176 kubelet[2551]: I1108 00:24:25.650150 2551 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:24:25.657239 kubelet[2551]: I1108 00:24:25.657052 2551 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:24:25.660897 kubelet[2551]: E1108 00:24:25.660859 2551 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:24:25.660897 kubelet[2551]: I1108 00:24:25.660897 2551 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:24:25.664227 kubelet[2551]: I1108 00:24:25.664189 2551 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:24:25.664461 kubelet[2551]: I1108 00:24:25.664429 2551 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:24:25.664670 kubelet[2551]: I1108 00:24:25.664459 2551 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-d839b30383","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:24:25.664753 kubelet[2551]: I1108 00:24:25.664672 2551 topology_manager.go:138] "Creating 
topology manager with none policy" Nov 8 00:24:25.664753 kubelet[2551]: I1108 00:24:25.664683 2551 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:24:25.664753 kubelet[2551]: I1108 00:24:25.664727 2551 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:24:25.669240 kubelet[2551]: I1108 00:24:25.665831 2551 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:24:25.669240 kubelet[2551]: I1108 00:24:25.665865 2551 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:24:25.669240 kubelet[2551]: I1108 00:24:25.665886 2551 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:24:25.669240 kubelet[2551]: I1108 00:24:25.665933 2551 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:24:25.678284 kubelet[2551]: I1108 00:24:25.678258 2551 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:24:25.678854 kubelet[2551]: I1108 00:24:25.678827 2551 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:24:25.679391 kubelet[2551]: I1108 00:24:25.679378 2551 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:24:25.679519 kubelet[2551]: I1108 00:24:25.679507 2551 server.go:1287] "Started kubelet" Nov 8 00:24:25.681853 kubelet[2551]: I1108 00:24:25.681834 2551 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:24:25.686605 kubelet[2551]: I1108 00:24:25.686559 2551 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:24:25.687358 kubelet[2551]: I1108 00:24:25.687319 2551 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:24:25.694242 kubelet[2551]: I1108 00:24:25.692661 2551 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:24:25.694242 kubelet[2551]: I1108 00:24:25.694174 2551 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:24:25.694575 kubelet[2551]: I1108 00:24:25.694559 2551 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:24:25.694824 kubelet[2551]: I1108 00:24:25.694793 2551 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:24:25.694969 kubelet[2551]: I1108 00:24:25.694949 2551 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:24:25.695938 kubelet[2551]: I1108 00:24:25.695921 2551 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:24:25.702954 kubelet[2551]: I1108 00:24:25.702906 2551 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:24:25.703178 kubelet[2551]: I1108 00:24:25.703143 2551 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:24:25.703431 kubelet[2551]: I1108 00:24:25.703407 2551 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:24:25.705180 kubelet[2551]: I1108 00:24:25.705159 2551 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:24:25.705867 kubelet[2551]: I1108 00:24:25.705836 2551 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:24:25.706176 kubelet[2551]: I1108 00:24:25.706162 2551 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:24:25.706290 kubelet[2551]: I1108 00:24:25.706279 2551 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:24:25.706752 kubelet[2551]: E1108 00:24:25.706714 2551 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:24:25.708137 kubelet[2551]: E1108 00:24:25.708106 2551 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:24:25.710804 kubelet[2551]: I1108 00:24:25.710772 2551 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:24:25.759490 kubelet[2551]: I1108 00:24:25.759447 2551 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:24:25.759490 kubelet[2551]: I1108 00:24:25.759470 2551 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:24:25.759490 kubelet[2551]: I1108 00:24:25.759492 2551 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:24:25.759732 kubelet[2551]: I1108 00:24:25.759700 2551 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:24:25.759732 kubelet[2551]: I1108 00:24:25.759711 2551 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:24:25.759732 kubelet[2551]: I1108 00:24:25.759729 2551 policy_none.go:49] "None policy: Start" Nov 8 00:24:25.759797 kubelet[2551]: I1108 00:24:25.759738 2551 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:24:25.759797 kubelet[2551]: I1108 00:24:25.759748 2551 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:24:25.759864 kubelet[2551]: I1108 00:24:25.759838 2551 state_mem.go:75] "Updated machine memory state" Nov 8 00:24:25.764016 kubelet[2551]: I1108 00:24:25.763989 2551 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:24:25.764174 kubelet[2551]: I1108 00:24:25.764143 2551 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:24:25.764174 kubelet[2551]: I1108 00:24:25.764159 2551 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:24:25.764755 kubelet[2551]: I1108 00:24:25.764734 2551 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:24:25.771201 kubelet[2551]: E1108 00:24:25.771131 2551 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:24:25.810079 kubelet[2551]: I1108 00:24:25.810033 2551 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.812082 kubelet[2551]: I1108 00:24:25.812036 2551 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.812316 kubelet[2551]: I1108 00:24:25.812194 2551 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.821058 kubelet[2551]: E1108 00:24:25.820892 2551 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.821058 kubelet[2551]: E1108 00:24:25.821023 2551 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-d839b30383\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.822872 kubelet[2551]: E1108 00:24:25.822821 2551 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-d839b30383\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.871941 kubelet[2551]: I1108 00:24:25.871886 2551 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.883343 kubelet[2551]: I1108 00:24:25.883280 2551 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.883343 kubelet[2551]: I1108 00:24:25.883351 2551 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.895648 kubelet[2551]: I1108 00:24:25.895471 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.895648 kubelet[2551]: I1108 00:24:25.895509 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7d2f13a53d4d57e87a177897fe88bf8-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d839b30383\" (UID: \"d7d2f13a53d4d57e87a177897fe88bf8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.895648 kubelet[2551]: I1108 00:24:25.895527 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7d2f13a53d4d57e87a177897fe88bf8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d839b30383\" (UID: \"d7d2f13a53d4d57e87a177897fe88bf8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.895648 kubelet[2551]: I1108 00:24:25.895557 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7d2f13a53d4d57e87a177897fe88bf8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-d839b30383\" (UID: \"d7d2f13a53d4d57e87a177897fe88bf8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.895648 kubelet[2551]: I1108 
00:24:25.895572 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.896403 kubelet[2551]: I1108 00:24:25.896303 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.896403 kubelet[2551]: I1108 00:24:25.896379 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.896403 kubelet[2551]: I1108 00:24:25.896397 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abae073533c9b91767e7534278294b57-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-d839b30383\" (UID: \"abae073533c9b91767e7534278294b57\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" Nov 8 00:24:25.896581 kubelet[2551]: I1108 00:24:25.896441 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef8e838160a97a4887f3de6a4ba44033-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-d839b30383\" (UID: \"ef8e838160a97a4887f3de6a4ba44033\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" Nov 8 00:24:26.063513 sudo[2584]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 8 00:24:26.064057 sudo[2584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 8 00:24:26.598574 sudo[2584]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:26.672404 kubelet[2551]: I1108 00:24:26.672334 2551 apiserver.go:52] "Watching apiserver" Nov 8 00:24:26.695812 kubelet[2551]: I1108 00:24:26.695739 2551 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:24:26.744156 kubelet[2551]: I1108 00:24:26.743178 2551 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:26.744796 kubelet[2551]: I1108 00:24:26.744617 2551 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" Nov 8 00:24:26.755322 kubelet[2551]: E1108 00:24:26.755198 2551 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-d839b30383\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" Nov 8 00:24:26.757420 kubelet[2551]: E1108 00:24:26.757371 2551 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-d839b30383\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" Nov 8 00:24:26.784295 kubelet[2551]: 
I1108 00:24:26.784240 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d839b30383" podStartSLOduration=3.784210604 podStartE2EDuration="3.784210604s" podCreationTimestamp="2025-11-08 00:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:26.774247167 +0000 UTC m=+1.202915731" watchObservedRunningTime="2025-11-08 00:24:26.784210604 +0000 UTC m=+1.212879168" Nov 8 00:24:26.796882 kubelet[2551]: I1108 00:24:26.796722 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d839b30383" podStartSLOduration=2.796702683 podStartE2EDuration="2.796702683s" podCreationTimestamp="2025-11-08 00:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:26.785523977 +0000 UTC m=+1.214192541" watchObservedRunningTime="2025-11-08 00:24:26.796702683 +0000 UTC m=+1.225371247" Nov 8 00:24:26.807613 kubelet[2551]: I1108 00:24:26.807284 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d839b30383" podStartSLOduration=2.807266335 podStartE2EDuration="2.807266335s" podCreationTimestamp="2025-11-08 00:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:26.79810908 +0000 UTC m=+1.226777645" watchObservedRunningTime="2025-11-08 00:24:26.807266335 +0000 UTC m=+1.235934899" Nov 8 00:24:27.945163 sudo[1708]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:28.109802 sshd[1689]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:28.113121 systemd[1]: sshd@6-65.109.8.72:22-147.75.109.163:32948.service: Deactivated successfully. Nov 8 00:24:28.115098 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:24:28.115284 systemd[1]: session-7.scope: Consumed 4.332s CPU time, 142.3M memory peak, 0B memory swap peak. Nov 8 00:24:28.117060 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:24:28.118950 systemd-logind[1479]: Removed session 7. Nov 8 00:24:31.593669 kubelet[2551]: I1108 00:24:31.593625 2551 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:24:31.595752 containerd[1501]: time="2025-11-08T00:24:31.595699811Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
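The kuberuntime_manager entry above shows the kubelet pushing the node's pod CIDR (192.168.0.0/24) down to the runtime through the CRI. As a rough illustration of what that allocation means for per-node pod addressing, the short Go sketch below (not part of the log; the /24 value is copied from the entry above) parses the CIDR and counts the addresses it covers.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Pod CIDR as reported by the kubelet in the entry above.
	cidr := "192.168.0.0/24"

	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		panic(err)
	}

	ones, bits := ipnet.Mask.Size()
	total := 1 << (bits - ones) // 256 addresses for a /24

	fmt.Printf("pod CIDR %v covers %d addresses (the CNI's own reservations reduce the usable count)\n",
		ipnet, total)
}
```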
Nov 8 00:24:31.596177 kubelet[2551]: I1108 00:24:31.595934 2551 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:24:32.396269 kubelet[2551]: W1108 00:24:32.394246 2551 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-6-n-d839b30383" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-d839b30383' and this object Nov 8 00:24:32.396269 kubelet[2551]: E1108 00:24:32.394295 2551 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-6-n-d839b30383\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-d839b30383' and this object" logger="UnhandledError" Nov 8 00:24:32.396400 systemd[1]: Created slice kubepods-besteffort-pod4478377e_eff6_4ce6_a16a_c1f2cadd5c61.slice - libcontainer container kubepods-besteffort-pod4478377e_eff6_4ce6_a16a_c1f2cadd5c61.slice. Nov 8 00:24:32.397255 kubelet[2551]: W1108 00:24:32.397005 2551 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-6-n-d839b30383" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-d839b30383' and this object Nov 8 00:24:32.397255 kubelet[2551]: E1108 00:24:32.397042 2551 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-6-n-d839b30383\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-d839b30383' and this object" logger="UnhandledError" Nov 8 00:24:32.408796 systemd[1]: Created slice kubepods-burstable-pod5ee5f43a_5438_4f97_90d6_44dbf30f42b7.slice - libcontainer container kubepods-burstable-pod5ee5f43a_5438_4f97_90d6_44dbf30f42b7.slice. 
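The two "Created slice" entries above show the cgroup naming used on this systemd/cgroup-v2 node: the pod's QoS class picks the parent (kubepods-besteffort-… vs kubepods-burstable-…) and the pod UID is embedded with its dashes escaped to underscores for systemd. A minimal sketch of that name construction follows; the helper is hypothetical and reconstructs only what is visible in the slice names above.

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName builds a kubepods slice name in the style seen in this journal:
// dashes in the pod UID are escaped to underscores for systemd.
// Illustrative only; the real kubelet code handles more cases (e.g. Guaranteed
// pods, which are parented directly under kubepods).
func sliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), escaped)
}

func main() {
	// Pod UIDs taken from the entries in this log.
	fmt.Println(sliceName("Besteffort", "4478377e-eff6-4ce6-a16a-c1f2cadd5c61")) // kube-proxy-qf4q9
	fmt.Println(sliceName("Burstable", "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"))  // cilium-zpvqc
}
```

Both printed names match the slice units systemd reports creating above.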
Nov 8 00:24:32.440147 kubelet[2551]: I1108 00:24:32.439459 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-config-path\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440147 kubelet[2551]: I1108 00:24:32.439502 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-run\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440147 kubelet[2551]: I1108 00:24:32.439530 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4478377e-eff6-4ce6-a16a-c1f2cadd5c61-xtables-lock\") pod \"kube-proxy-qf4q9\" (UID: \"4478377e-eff6-4ce6-a16a-c1f2cadd5c61\") " pod="kube-system/kube-proxy-qf4q9" Nov 8 00:24:32.440147 kubelet[2551]: I1108 00:24:32.439547 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7l2x\" (UniqueName: \"kubernetes.io/projected/4478377e-eff6-4ce6-a16a-c1f2cadd5c61-kube-api-access-c7l2x\") pod \"kube-proxy-qf4q9\" (UID: \"4478377e-eff6-4ce6-a16a-c1f2cadd5c61\") " pod="kube-system/kube-proxy-qf4q9" Nov 8 00:24:32.440147 kubelet[2551]: I1108 00:24:32.439572 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cni-path\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440147 kubelet[2551]: I1108 00:24:32.439587 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hostproc\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440393 kubelet[2551]: I1108 00:24:32.439602 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-net\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440393 kubelet[2551]: I1108 00:24:32.439617 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-kernel\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440393 kubelet[2551]: I1108 00:24:32.439632 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-bpf-maps\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440393 kubelet[2551]: I1108 00:24:32.439644 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-lib-modules\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440393 kubelet[2551]: I1108 00:24:32.439666 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-cgroup\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440393 kubelet[2551]: I1108 00:24:32.439710 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-etc-cni-netd\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440497 kubelet[2551]: I1108 00:24:32.439726 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-xtables-lock\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440497 kubelet[2551]: I1108 00:24:32.439740 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-clustermesh-secrets\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440497 kubelet[2551]: I1108 00:24:32.439756 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hubble-tls\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.440497 kubelet[2551]: I1108 00:24:32.439771 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4478377e-eff6-4ce6-a16a-c1f2cadd5c61-kube-proxy\") pod \"kube-proxy-qf4q9\" (UID: \"4478377e-eff6-4ce6-a16a-c1f2cadd5c61\") " pod="kube-system/kube-proxy-qf4q9" Nov 8 00:24:32.440497 kubelet[2551]: I1108 00:24:32.439788 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4478377e-eff6-4ce6-a16a-c1f2cadd5c61-lib-modules\") pod \"kube-proxy-qf4q9\" (UID: \"4478377e-eff6-4ce6-a16a-c1f2cadd5c61\") " pod="kube-system/kube-proxy-qf4q9" Nov 8 00:24:32.440497 kubelet[2551]: I1108 00:24:32.439806 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh78f\" (UniqueName: \"kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-kube-api-access-xh78f\") pod \"cilium-zpvqc\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " pod="kube-system/cilium-zpvqc" Nov 8 00:24:32.581980 systemd[1]: Created slice kubepods-besteffort-pod8c6ce54f_ded5_426a_aa0c_1c121606e402.slice - libcontainer container kubepods-besteffort-pod8c6ce54f_ded5_426a_aa0c_1c121606e402.slice. 
Nov 8 00:24:32.641040 kubelet[2551]: I1108 00:24:32.640987 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c6ce54f-ded5-426a-aa0c-1c121606e402-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vgxhk\" (UID: \"8c6ce54f-ded5-426a-aa0c-1c121606e402\") " pod="kube-system/cilium-operator-6c4d7847fc-vgxhk" Nov 8 00:24:32.641040 kubelet[2551]: I1108 00:24:32.641042 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lzk\" (UniqueName: \"kubernetes.io/projected/8c6ce54f-ded5-426a-aa0c-1c121606e402-kube-api-access-h6lzk\") pod \"cilium-operator-6c4d7847fc-vgxhk\" (UID: \"8c6ce54f-ded5-426a-aa0c-1c121606e402\") " pod="kube-system/cilium-operator-6c4d7847fc-vgxhk" Nov 8 00:24:32.704921 containerd[1501]: time="2025-11-08T00:24:32.704782497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf4q9,Uid:4478377e-eff6-4ce6-a16a-c1f2cadd5c61,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:32.731445 containerd[1501]: time="2025-11-08T00:24:32.730969420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:32.731445 containerd[1501]: time="2025-11-08T00:24:32.731024714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:32.731445 containerd[1501]: time="2025-11-08T00:24:32.731040473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:32.731445 containerd[1501]: time="2025-11-08T00:24:32.731116236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:32.758631 systemd[1]: run-containerd-runc-k8s.io-cd9594840c9dd86508306673be7ba1cafd52bdaf46382c7ec15324f333afab71-runc.vXQ250.mount: Deactivated successfully. Nov 8 00:24:32.770496 systemd[1]: Started cri-containerd-cd9594840c9dd86508306673be7ba1cafd52bdaf46382c7ec15324f333afab71.scope - libcontainer container cd9594840c9dd86508306673be7ba1cafd52bdaf46382c7ec15324f333afab71. 
Nov 8 00:24:32.793377 containerd[1501]: time="2025-11-08T00:24:32.793299912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf4q9,Uid:4478377e-eff6-4ce6-a16a-c1f2cadd5c61,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd9594840c9dd86508306673be7ba1cafd52bdaf46382c7ec15324f333afab71\"" Nov 8 00:24:32.798163 containerd[1501]: time="2025-11-08T00:24:32.798086399Z" level=info msg="CreateContainer within sandbox \"cd9594840c9dd86508306673be7ba1cafd52bdaf46382c7ec15324f333afab71\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:24:32.813501 containerd[1501]: time="2025-11-08T00:24:32.813442068Z" level=info msg="CreateContainer within sandbox \"cd9594840c9dd86508306673be7ba1cafd52bdaf46382c7ec15324f333afab71\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0cd826ea892b0378443872cd034846a8f5608e38aa9c9167f8918daa1c64ec0d\"" Nov 8 00:24:32.814532 containerd[1501]: time="2025-11-08T00:24:32.814462523Z" level=info msg="StartContainer for \"0cd826ea892b0378443872cd034846a8f5608e38aa9c9167f8918daa1c64ec0d\"" Nov 8 00:24:32.841083 systemd[1]: Started cri-containerd-0cd826ea892b0378443872cd034846a8f5608e38aa9c9167f8918daa1c64ec0d.scope - libcontainer container 0cd826ea892b0378443872cd034846a8f5608e38aa9c9167f8918daa1c64ec0d. Nov 8 00:24:32.874943 containerd[1501]: time="2025-11-08T00:24:32.874263730Z" level=info msg="StartContainer for \"0cd826ea892b0378443872cd034846a8f5608e38aa9c9167f8918daa1c64ec0d\" returns successfully" Nov 8 00:24:32.886016 containerd[1501]: time="2025-11-08T00:24:32.885935791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vgxhk,Uid:8c6ce54f-ded5-426a-aa0c-1c121606e402,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:32.912271 containerd[1501]: time="2025-11-08T00:24:32.912056792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:32.912543 containerd[1501]: time="2025-11-08T00:24:32.912497428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:32.912833 containerd[1501]: time="2025-11-08T00:24:32.912776902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:32.913138 containerd[1501]: time="2025-11-08T00:24:32.913073138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:32.934549 systemd[1]: Started cri-containerd-f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5.scope - libcontainer container f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5. 
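The sandbox and container IDs containerd hands back here (cd9594…, 0cd826…) are 64-character lowercase hex strings, and the same IDs reappear in the cri-containerd-&lt;id&gt;.scope unit names that systemd starts. A quick sketch that validates that shape; the check is illustrative and is not a containerd API.

```go
package main

import (
	"fmt"
	"regexp"
)

// containerd IDs in this journal are 64 lowercase hex characters.
var idPattern = regexp.MustCompile(`^[0-9a-f]{64}$`)

func main() {
	ids := []string{
		"cd9594840c9dd86508306673be7ba1cafd52bdaf46382c7ec15324f333afab71", // kube-proxy sandbox
		"0cd826ea892b0378443872cd034846a8f5608e38aa9c9167f8918daa1c64ec0d", // kube-proxy container
	}
	for _, id := range ids {
		fmt.Printf("%s… valid=%v\n", id[:12], idPattern.MatchString(id))
	}
}
```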
Nov 8 00:24:32.988925 containerd[1501]: time="2025-11-08T00:24:32.988569241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vgxhk,Uid:8c6ce54f-ded5-426a-aa0c-1c121606e402,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5\"" Nov 8 00:24:32.992108 containerd[1501]: time="2025-11-08T00:24:32.992054749Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 8 00:24:33.542030 kubelet[2551]: E1108 00:24:33.541973 2551 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Nov 8 00:24:33.542276 kubelet[2551]: E1108 00:24:33.542105 2551 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-clustermesh-secrets podName:5ee5f43a-5438-4f97-90d6-44dbf30f42b7 nodeName:}" failed. No retries permitted until 2025-11-08 00:24:34.042075908 +0000 UTC m=+8.470744483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-clustermesh-secrets") pod "cilium-zpvqc" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7") : failed to sync secret cache: timed out waiting for the condition Nov 8 00:24:33.779995 kubelet[2551]: I1108 00:24:33.777883 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qf4q9" podStartSLOduration=1.777858412 podStartE2EDuration="1.777858412s" podCreationTimestamp="2025-11-08 00:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:33.776518218 +0000 UTC m=+8.205186792" watchObservedRunningTime="2025-11-08 00:24:33.777858412 +0000 UTC m=+8.206527045" Nov 8 00:24:34.213310 containerd[1501]: time="2025-11-08T00:24:34.212884607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zpvqc,Uid:5ee5f43a-5438-4f97-90d6-44dbf30f42b7,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:34.234651 containerd[1501]: time="2025-11-08T00:24:34.234541986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:34.234651 containerd[1501]: time="2025-11-08T00:24:34.234613009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:34.234855 containerd[1501]: time="2025-11-08T00:24:34.234631013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:34.234855 containerd[1501]: time="2025-11-08T00:24:34.234749455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:34.263516 systemd[1]: Started cri-containerd-ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f.scope - libcontainer container ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f. 
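The pod_startup_latency_tracker entry above reports podStartE2EDuration=1.777858412s for kube-proxy-qf4q9, which is essentially the gap between podCreationTimestamp and observedRunningTime (the tracker samples its own clock slightly later, so the reported value differs from the plain subtraction in the last milliseconds). The sketch below redoes that arithmetic with the timestamp format these entries use; stripping the monotonic " m=+…" suffix before parsing is an assumption of this helper, not kubelet behaviour.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseKlogTime parses timestamps as printed in these kubelet entries,
// e.g. "2025-11-08 00:24:33.776518218 +0000 UTC m=+8.205186792".
// The monotonic " m=+..." suffix is dropped before parsing.
func parseKlogTime(s string) (time.Time, error) {
	if i := strings.Index(s, " m=+"); i >= 0 {
		s = s[:i]
	}
	return time.Parse("2006-01-02 15:04:05 -0700 MST", s)
}

func main() {
	created, _ := parseKlogTime("2025-11-08 00:24:32 +0000 UTC")
	running, _ := parseKlogTime("2025-11-08 00:24:33.776518218 +0000 UTC m=+8.205186792")

	// Prints ~1.776518218s, close to the 1.777858412s the tracker reports.
	fmt.Println("kube-proxy creation -> observed running:", running.Sub(created))
}
```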
Nov 8 00:24:34.285260 containerd[1501]: time="2025-11-08T00:24:34.285126177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zpvqc,Uid:5ee5f43a-5438-4f97-90d6-44dbf30f42b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\"" Nov 8 00:24:34.882515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747932310.mount: Deactivated successfully. Nov 8 00:24:35.384488 containerd[1501]: time="2025-11-08T00:24:35.384438679Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:35.385481 containerd[1501]: time="2025-11-08T00:24:35.385337043Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 8 00:24:35.386473 containerd[1501]: time="2025-11-08T00:24:35.386193600Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:35.387407 containerd[1501]: time="2025-11-08T00:24:35.387375046Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.395276386s" Nov 8 00:24:35.387450 containerd[1501]: time="2025-11-08T00:24:35.387411585Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 8 00:24:35.391859 containerd[1501]: time="2025-11-08T00:24:35.391829070Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 8 00:24:35.392002 containerd[1501]: time="2025-11-08T00:24:35.391976416Z" level=info msg="CreateContainer within sandbox \"f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 8 00:24:35.405990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940670201.mount: Deactivated successfully. Nov 8 00:24:35.407880 containerd[1501]: time="2025-11-08T00:24:35.407823197Z" level=info msg="CreateContainer within sandbox \"f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6\"" Nov 8 00:24:35.410267 containerd[1501]: time="2025-11-08T00:24:35.409169272Z" level=info msg="StartContainer for \"dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6\"" Nov 8 00:24:35.439468 systemd[1]: Started cri-containerd-dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6.scope - libcontainer container dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6. 
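For the cilium operator image pull above, containerd reports bytes read=18904197 and a pull time of 2.395276386s, which works out to roughly 7.9 MB/s from quay.io. The arithmetic, using only the two values from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values reported by containerd for the cilium operator image pull above.
	bytesRead := 18904197.0
	elapsed, _ := time.ParseDuration("2.395276386s")

	rate := bytesRead / elapsed.Seconds()
	fmt.Printf("pulled %.1f MB in %s -> %.1f MB/s\n",
		bytesRead/1e6, elapsed, rate/1e6)
}
```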
Nov 8 00:24:35.470966 containerd[1501]: time="2025-11-08T00:24:35.470921218Z" level=info msg="StartContainer for \"dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6\" returns successfully" Nov 8 00:24:36.052534 kubelet[2551]: I1108 00:24:36.052462 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vgxhk" podStartSLOduration=1.650214945 podStartE2EDuration="4.049589285s" podCreationTimestamp="2025-11-08 00:24:32 +0000 UTC" firstStartedPulling="2025-11-08 00:24:32.991210135 +0000 UTC m=+7.419878699" lastFinishedPulling="2025-11-08 00:24:35.390584475 +0000 UTC m=+9.819253039" observedRunningTime="2025-11-08 00:24:35.840414296 +0000 UTC m=+10.269082860" watchObservedRunningTime="2025-11-08 00:24:36.049589285 +0000 UTC m=+10.478257850" Nov 8 00:24:36.102365 update_engine[1480]: I20251108 00:24:36.102267 1480 update_attempter.cc:509] Updating boot flags... Nov 8 00:24:36.179826 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2972) Nov 8 00:24:36.281238 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2975) Nov 8 00:24:39.531591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012873887.mount: Deactivated successfully. Nov 8 00:24:40.968855 containerd[1501]: time="2025-11-08T00:24:40.968785579Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:40.971027 containerd[1501]: time="2025-11-08T00:24:40.970991051Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 8 00:24:40.982336 containerd[1501]: time="2025-11-08T00:24:40.982275323Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:40.984289 containerd[1501]: time="2025-11-08T00:24:40.984150379Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.592285674s" Nov 8 00:24:40.984289 containerd[1501]: time="2025-11-08T00:24:40.984185752Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 8 00:24:40.987280 containerd[1501]: time="2025-11-08T00:24:40.987255098Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 8 00:24:41.058893 containerd[1501]: time="2025-11-08T00:24:41.058832721Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\"" Nov 8 00:24:41.061238 containerd[1501]: time="2025-11-08T00:24:41.059569082Z" 
level=info msg="StartContainer for \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\"" Nov 8 00:24:41.216390 systemd[1]: Started cri-containerd-2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec.scope - libcontainer container 2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec. Nov 8 00:24:41.239132 containerd[1501]: time="2025-11-08T00:24:41.238997816Z" level=info msg="StartContainer for \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\" returns successfully" Nov 8 00:24:41.250956 systemd[1]: cri-containerd-2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec.scope: Deactivated successfully. Nov 8 00:24:41.319746 containerd[1501]: time="2025-11-08T00:24:41.311089042Z" level=info msg="shim disconnected" id=2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec namespace=k8s.io Nov 8 00:24:41.319746 containerd[1501]: time="2025-11-08T00:24:41.319737501Z" level=warning msg="cleaning up after shim disconnected" id=2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec namespace=k8s.io Nov 8 00:24:41.319746 containerd[1501]: time="2025-11-08T00:24:41.319753930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:24:41.812427 containerd[1501]: time="2025-11-08T00:24:41.812269198Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 8 00:24:41.825557 containerd[1501]: time="2025-11-08T00:24:41.825517568Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\"" Nov 8 00:24:41.828253 containerd[1501]: time="2025-11-08T00:24:41.827495479Z" level=info msg="StartContainer for \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\"" Nov 8 00:24:41.856382 systemd[1]: Started cri-containerd-5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a.scope - libcontainer container 5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a. Nov 8 00:24:41.888267 containerd[1501]: time="2025-11-08T00:24:41.888212323Z" level=info msg="StartContainer for \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\" returns successfully" Nov 8 00:24:41.898049 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:24:41.898673 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:24:41.898864 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:24:41.904284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:24:41.904502 systemd[1]: cri-containerd-5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a.scope: Deactivated successfully. 
Nov 8 00:24:41.930495 containerd[1501]: time="2025-11-08T00:24:41.930296648Z" level=info msg="shim disconnected" id=5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a namespace=k8s.io Nov 8 00:24:41.931006 containerd[1501]: time="2025-11-08T00:24:41.930618382Z" level=warning msg="cleaning up after shim disconnected" id=5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a namespace=k8s.io Nov 8 00:24:41.931006 containerd[1501]: time="2025-11-08T00:24:41.930643977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:24:41.957234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:24:42.048511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec-rootfs.mount: Deactivated successfully. Nov 8 00:24:42.815304 containerd[1501]: time="2025-11-08T00:24:42.815212137Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 8 00:24:42.840443 containerd[1501]: time="2025-11-08T00:24:42.840395629Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\"" Nov 8 00:24:42.841995 containerd[1501]: time="2025-11-08T00:24:42.841947431Z" level=info msg="StartContainer for \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\"" Nov 8 00:24:42.879406 systemd[1]: Started cri-containerd-59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8.scope - libcontainer container 59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8. Nov 8 00:24:42.911780 containerd[1501]: time="2025-11-08T00:24:42.911734839Z" level=info msg="StartContainer for \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\" returns successfully" Nov 8 00:24:42.915476 systemd[1]: cri-containerd-59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8.scope: Deactivated successfully. Nov 8 00:24:42.944974 containerd[1501]: time="2025-11-08T00:24:42.944789853Z" level=info msg="shim disconnected" id=59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8 namespace=k8s.io Nov 8 00:24:42.944974 containerd[1501]: time="2025-11-08T00:24:42.944840804Z" level=warning msg="cleaning up after shim disconnected" id=59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8 namespace=k8s.io Nov 8 00:24:42.944974 containerd[1501]: time="2025-11-08T00:24:42.944849680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:24:43.048598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8-rootfs.mount: Deactivated successfully. 
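As each of the short-lived Cilium init containers (mount-cgroup, apply-sysctl-overwrites, and later mount-bpf-fs and clean-cilium-state) exits, containerd logs a "shim disconnected" / "cleaning up after shim disconnected" pair and systemd tears down the matching rootfs mount, which is why those messages repeat above. The sketch below is a log-scanning illustration, not a containerd tool: it tallies those shutdown messages per container ID from raw journal text (two entries are copied, abbreviated, from the lines above).

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// Abbreviated journal excerpts from the entries above.
const journal = `
level=info msg="shim disconnected" id=2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec namespace=k8s.io
level=warning msg="cleaning up after shim disconnected" id=2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec namespace=k8s.io
level=info msg="shim disconnected" id=5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a namespace=k8s.io
`

var idField = regexp.MustCompile(`\bid=([0-9a-f]{64})\b`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "shim disconnected") {
			continue
		}
		if m := idField.FindStringSubmatch(line); m != nil {
			counts[m[1][:12]]++ // keep the short ID prefix for readability
		}
	}
	for id, n := range counts {
		fmt.Printf("%s… %d shutdown message(s)\n", id, n)
	}
}
```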
Nov 8 00:24:43.822657 containerd[1501]: time="2025-11-08T00:24:43.822577726Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 8 00:24:43.848888 containerd[1501]: time="2025-11-08T00:24:43.848802414Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\"" Nov 8 00:24:43.850254 containerd[1501]: time="2025-11-08T00:24:43.849834823Z" level=info msg="StartContainer for \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\"" Nov 8 00:24:43.883380 systemd[1]: Started cri-containerd-cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0.scope - libcontainer container cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0. Nov 8 00:24:43.910092 systemd[1]: cri-containerd-cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0.scope: Deactivated successfully. Nov 8 00:24:43.912406 containerd[1501]: time="2025-11-08T00:24:43.912063464Z" level=info msg="StartContainer for \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\" returns successfully" Nov 8 00:24:43.936181 containerd[1501]: time="2025-11-08T00:24:43.936120276Z" level=info msg="shim disconnected" id=cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0 namespace=k8s.io Nov 8 00:24:43.936591 containerd[1501]: time="2025-11-08T00:24:43.936258513Z" level=warning msg="cleaning up after shim disconnected" id=cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0 namespace=k8s.io Nov 8 00:24:43.936591 containerd[1501]: time="2025-11-08T00:24:43.936275875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:24:43.948439 containerd[1501]: time="2025-11-08T00:24:43.948380888Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:24:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:24:44.049379 systemd[1]: run-containerd-runc-k8s.io-cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0-runc.gtp3TO.mount: Deactivated successfully. Nov 8 00:24:44.049517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0-rootfs.mount: Deactivated successfully. 
Nov 8 00:24:44.823765 containerd[1501]: time="2025-11-08T00:24:44.823719146Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:24:44.844179 containerd[1501]: time="2025-11-08T00:24:44.843663814Z" level=info msg="CreateContainer within sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\"" Nov 8 00:24:44.845179 containerd[1501]: time="2025-11-08T00:24:44.844381054Z" level=info msg="StartContainer for \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\"" Nov 8 00:24:44.884700 systemd[1]: Started cri-containerd-35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832.scope - libcontainer container 35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832. Nov 8 00:24:44.907532 containerd[1501]: time="2025-11-08T00:24:44.907495981Z" level=info msg="StartContainer for \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\" returns successfully" Nov 8 00:24:45.086496 kubelet[2551]: I1108 00:24:45.085625 2551 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:24:45.133479 systemd[1]: Created slice kubepods-burstable-pod9391c4ca_c8f2_4e2f_a010_9330466f9a48.slice - libcontainer container kubepods-burstable-pod9391c4ca_c8f2_4e2f_a010_9330466f9a48.slice. Nov 8 00:24:45.138134 kubelet[2551]: I1108 00:24:45.138044 2551 status_manager.go:890] "Failed to get status for pod" podUID="9391c4ca-c8f2-4e2f-a010-9330466f9a48" pod="kube-system/coredns-668d6bf9bc-5sgtm" err="pods \"coredns-668d6bf9bc-5sgtm\" is forbidden: User \"system:node:ci-4081-3-6-n-d839b30383\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-d839b30383' and this object" Nov 8 00:24:45.142205 systemd[1]: Created slice kubepods-burstable-poda65ceb3b_32cd_4390_8ab5_13d14e7b7372.slice - libcontainer container kubepods-burstable-poda65ceb3b_32cd_4390_8ab5_13d14e7b7372.slice. 
Nov 8 00:24:45.242559 kubelet[2551]: I1108 00:24:45.242345 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9391c4ca-c8f2-4e2f-a010-9330466f9a48-config-volume\") pod \"coredns-668d6bf9bc-5sgtm\" (UID: \"9391c4ca-c8f2-4e2f-a010-9330466f9a48\") " pod="kube-system/coredns-668d6bf9bc-5sgtm" Nov 8 00:24:45.242559 kubelet[2551]: I1108 00:24:45.242393 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a65ceb3b-32cd-4390-8ab5-13d14e7b7372-config-volume\") pod \"coredns-668d6bf9bc-rhbss\" (UID: \"a65ceb3b-32cd-4390-8ab5-13d14e7b7372\") " pod="kube-system/coredns-668d6bf9bc-rhbss" Nov 8 00:24:45.242559 kubelet[2551]: I1108 00:24:45.242420 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knj4f\" (UniqueName: \"kubernetes.io/projected/9391c4ca-c8f2-4e2f-a010-9330466f9a48-kube-api-access-knj4f\") pod \"coredns-668d6bf9bc-5sgtm\" (UID: \"9391c4ca-c8f2-4e2f-a010-9330466f9a48\") " pod="kube-system/coredns-668d6bf9bc-5sgtm" Nov 8 00:24:45.242559 kubelet[2551]: I1108 00:24:45.242448 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9698m\" (UniqueName: \"kubernetes.io/projected/a65ceb3b-32cd-4390-8ab5-13d14e7b7372-kube-api-access-9698m\") pod \"coredns-668d6bf9bc-rhbss\" (UID: \"a65ceb3b-32cd-4390-8ab5-13d14e7b7372\") " pod="kube-system/coredns-668d6bf9bc-rhbss" Nov 8 00:24:45.440545 containerd[1501]: time="2025-11-08T00:24:45.440397800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5sgtm,Uid:9391c4ca-c8f2-4e2f-a010-9330466f9a48,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:45.447328 containerd[1501]: time="2025-11-08T00:24:45.447278070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhbss,Uid:a65ceb3b-32cd-4390-8ab5-13d14e7b7372,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:45.849618 kubelet[2551]: I1108 00:24:45.849513 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zpvqc" podStartSLOduration=7.151308367 podStartE2EDuration="13.849457588s" podCreationTimestamp="2025-11-08 00:24:32 +0000 UTC" firstStartedPulling="2025-11-08 00:24:34.286993279 +0000 UTC m=+8.715661853" lastFinishedPulling="2025-11-08 00:24:40.9851425 +0000 UTC m=+15.413811074" observedRunningTime="2025-11-08 00:24:45.849348723 +0000 UTC m=+20.278017286" watchObservedRunningTime="2025-11-08 00:24:45.849457588 +0000 UTC m=+20.278126153" Nov 8 00:24:46.974567 systemd-networkd[1414]: cilium_host: Link UP Nov 8 00:24:46.974706 systemd-networkd[1414]: cilium_net: Link UP Nov 8 00:24:46.974856 systemd-networkd[1414]: cilium_net: Gained carrier Nov 8 00:24:46.978936 systemd-networkd[1414]: cilium_host: Gained carrier Nov 8 00:24:47.063472 systemd-networkd[1414]: cilium_vxlan: Link UP Nov 8 00:24:47.063485 systemd-networkd[1414]: cilium_vxlan: Gained carrier Nov 8 00:24:47.245392 systemd-networkd[1414]: cilium_net: Gained IPv6LL Nov 8 00:24:47.447424 kernel: NET: Registered PF_ALG protocol family Nov 8 00:24:47.806089 systemd-networkd[1414]: cilium_host: Gained IPv6LL Nov 8 00:24:48.029524 systemd-networkd[1414]: lxc_health: Link UP Nov 8 00:24:48.029851 systemd-networkd[1414]: lxc_health: Gained carrier Nov 8 00:24:48.520268 systemd-networkd[1414]: lxc49134f778473: Link UP Nov 8 
00:24:48.529867 kernel: eth0: renamed from tmpb44cd Nov 8 00:24:48.538261 systemd-networkd[1414]: lxc49134f778473: Gained carrier Nov 8 00:24:48.545316 systemd-networkd[1414]: lxcbe2a4b0161f6: Link UP Nov 8 00:24:48.550579 kernel: eth0: renamed from tmp6cbde Nov 8 00:24:48.559483 systemd-networkd[1414]: lxcbe2a4b0161f6: Gained carrier Nov 8 00:24:48.639012 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL Nov 8 00:24:49.662347 systemd-networkd[1414]: lxc_health: Gained IPv6LL Nov 8 00:24:49.726022 systemd-networkd[1414]: lxc49134f778473: Gained IPv6LL Nov 8 00:24:49.789420 systemd-networkd[1414]: lxcbe2a4b0161f6: Gained IPv6LL Nov 8 00:24:51.842273 containerd[1501]: time="2025-11-08T00:24:51.840911038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:51.842273 containerd[1501]: time="2025-11-08T00:24:51.840966721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:51.842273 containerd[1501]: time="2025-11-08T00:24:51.840976869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:51.842273 containerd[1501]: time="2025-11-08T00:24:51.841054761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:51.868151 containerd[1501]: time="2025-11-08T00:24:51.867701993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:51.868366 containerd[1501]: time="2025-11-08T00:24:51.868163195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:51.868544 containerd[1501]: time="2025-11-08T00:24:51.868491563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:51.868726 containerd[1501]: time="2025-11-08T00:24:51.868694324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:51.893012 systemd[1]: Started cri-containerd-b44cd06f1255c91be4e4ab8171b0947f114c160fe99b6c8f1c62db00b89d82b3.scope - libcontainer container b44cd06f1255c91be4e4ab8171b0947f114c160fe99b6c8f1c62db00b89d82b3. Nov 8 00:24:51.901576 systemd[1]: Started cri-containerd-6cbde672c09010d0c3e08e04a07efe1f41a5325a57447ec9731b9f313799c8f4.scope - libcontainer container 6cbde672c09010d0c3e08e04a07efe1f41a5325a57447ec9731b9f313799c8f4. 
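The systemd-networkd entries above show each new Cilium interface (cilium_host, cilium_net, cilium_vxlan, lxc_health, and the per-pod lxc* devices) gaining an IPv6 link-local address shortly after coming up. Link-local addresses are commonly derived from the interface MAC via EUI-64; the sketch below shows that derivation with a made-up MAC, since the journal does not print the interfaces' addresses, and a kernel configured for stable-privacy addressing would produce a different result.

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalEUI64 derives the classic fe80::/64 link-local address from a MAC:
// flip the universal/local bit of the first octet and splice ff:fe into the middle.
func linkLocalEUI64(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13] = 0xfe, mac[3]
	ip[14], ip[15] = mac[4], mac[5]
	return ip
}

func main() {
	// Hypothetical MAC; the journal above does not record one.
	mac, _ := net.ParseMAC("aa:bb:cc:dd:ee:ff")
	fmt.Println(linkLocalEUI64(mac)) // fe80::a8bb:ccff:fedd:eeff
}
```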
Nov 8 00:24:51.971823 containerd[1501]: time="2025-11-08T00:24:51.971620220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhbss,Uid:a65ceb3b-32cd-4390-8ab5-13d14e7b7372,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44cd06f1255c91be4e4ab8171b0947f114c160fe99b6c8f1c62db00b89d82b3\"" Nov 8 00:24:51.996025 containerd[1501]: time="2025-11-08T00:24:51.995797602Z" level=info msg="CreateContainer within sandbox \"b44cd06f1255c91be4e4ab8171b0947f114c160fe99b6c8f1c62db00b89d82b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:24:52.004722 containerd[1501]: time="2025-11-08T00:24:52.004632652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5sgtm,Uid:9391c4ca-c8f2-4e2f-a010-9330466f9a48,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cbde672c09010d0c3e08e04a07efe1f41a5325a57447ec9731b9f313799c8f4\"" Nov 8 00:24:52.014772 containerd[1501]: time="2025-11-08T00:24:52.014716064Z" level=info msg="CreateContainer within sandbox \"6cbde672c09010d0c3e08e04a07efe1f41a5325a57447ec9731b9f313799c8f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:24:52.017264 containerd[1501]: time="2025-11-08T00:24:52.017243887Z" level=info msg="CreateContainer within sandbox \"b44cd06f1255c91be4e4ab8171b0947f114c160fe99b6c8f1c62db00b89d82b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"127db2bb03a447e275b59310052be1b956c1bcc4e6e0026cc88b74b573ef9afe\"" Nov 8 00:24:52.018106 containerd[1501]: time="2025-11-08T00:24:52.018086679Z" level=info msg="StartContainer for \"127db2bb03a447e275b59310052be1b956c1bcc4e6e0026cc88b74b573ef9afe\"" Nov 8 00:24:52.033756 containerd[1501]: time="2025-11-08T00:24:52.033673064Z" level=info msg="CreateContainer within sandbox \"6cbde672c09010d0c3e08e04a07efe1f41a5325a57447ec9731b9f313799c8f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2c097e8937aa4df617fd0f128b84563c74646a0019a4c47f816fb5edb5eb2ea\"" Nov 8 00:24:52.035403 containerd[1501]: time="2025-11-08T00:24:52.035186231Z" level=info msg="StartContainer for \"c2c097e8937aa4df617fd0f128b84563c74646a0019a4c47f816fb5edb5eb2ea\"" Nov 8 00:24:52.052637 systemd[1]: Started cri-containerd-127db2bb03a447e275b59310052be1b956c1bcc4e6e0026cc88b74b573ef9afe.scope - libcontainer container 127db2bb03a447e275b59310052be1b956c1bcc4e6e0026cc88b74b573ef9afe. Nov 8 00:24:52.078801 systemd[1]: Started cri-containerd-c2c097e8937aa4df617fd0f128b84563c74646a0019a4c47f816fb5edb5eb2ea.scope - libcontainer container c2c097e8937aa4df617fd0f128b84563c74646a0019a4c47f816fb5edb5eb2ea. Nov 8 00:24:52.093167 containerd[1501]: time="2025-11-08T00:24:52.093047200Z" level=info msg="StartContainer for \"127db2bb03a447e275b59310052be1b956c1bcc4e6e0026cc88b74b573ef9afe\" returns successfully" Nov 8 00:24:52.116264 containerd[1501]: time="2025-11-08T00:24:52.115943494Z" level=info msg="StartContainer for \"c2c097e8937aa4df617fd0f128b84563c74646a0019a4c47f816fb5edb5eb2ea\" returns successfully" Nov 8 00:24:52.848889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737144179.mount: Deactivated successfully. 
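The containerd entries above follow a stable message shape: a "StartContainer for \"<id>\"" request, then a matching "... returns successfully" entry once the container is running. That makes the journal easy to post-process. A minimal sketch, assuming one journal entry per line on stdin (for example piped from journalctl -u containerd, a hypothetical invocation) and the usual 64-hex-character container IDs; the function name is illustrative, not part of any containerd tooling:

    import re
    import sys

    # Container IDs appear backslash-escaped inside the containerd msg field, e.g.
    #   msg="StartContainer for \"127db2bb...\" returns successfully"
    # so the pattern matches a literal backslash-quote around the 64-hex-char ID.
    START_OK = re.compile(r'msg="StartContainer for \\"([0-9a-f]{64})\\" returns successfully"')

    def started_containers(lines):
        """Yield container IDs that containerd reported as successfully started."""
        for line in lines:
            if m := START_OK.search(line):
                yield m.group(1)

    if __name__ == "__main__":
        for cid in started_containers(sys.stdin):
            print(cid)

Run against the entries above, this would print the two coredns container IDs (127db2bb... and c2c097e8...).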
Nov 8 00:24:52.868521 kubelet[2551]: I1108 00:24:52.868170 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rhbss" podStartSLOduration=20.868145589 podStartE2EDuration="20.868145589s" podCreationTimestamp="2025-11-08 00:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:52.868067076 +0000 UTC m=+27.296735691" watchObservedRunningTime="2025-11-08 00:24:52.868145589 +0000 UTC m=+27.296814183" Nov 8 00:25:01.442766 kubelet[2551]: I1108 00:25:01.441986 2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:25:01.469857 kubelet[2551]: I1108 00:25:01.469773 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5sgtm" podStartSLOduration=29.469751365 podStartE2EDuration="29.469751365s" podCreationTimestamp="2025-11-08 00:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:52.904564986 +0000 UTC m=+27.333233560" watchObservedRunningTime="2025-11-08 00:25:01.469751365 +0000 UTC m=+35.898419939" Nov 8 00:25:50.414569 systemd[1]: Started sshd@7-65.109.8.72:22-147.75.109.163:60394.service - OpenSSH per-connection server daemon (147.75.109.163:60394). Nov 8 00:25:51.430545 sshd[3940]: Accepted publickey for core from 147.75.109.163 port 60394 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:25:51.433784 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:51.444123 systemd-logind[1479]: New session 8 of user core. Nov 8 00:25:51.447394 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:25:52.620454 sshd[3940]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:52.625392 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:25:52.626099 systemd[1]: sshd@7-65.109.8.72:22-147.75.109.163:60394.service: Deactivated successfully. Nov 8 00:25:52.628035 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:25:52.629352 systemd-logind[1479]: Removed session 8. Nov 8 00:25:57.799616 systemd[1]: Started sshd@8-65.109.8.72:22-147.75.109.163:60402.service - OpenSSH per-connection server daemon (147.75.109.163:60402). Nov 8 00:25:58.792985 sshd[3954]: Accepted publickey for core from 147.75.109.163 port 60402 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:25:58.794835 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:58.800081 systemd-logind[1479]: New session 9 of user core. Nov 8 00:25:58.808612 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:25:59.566501 sshd[3954]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:59.570893 systemd[1]: sshd@8-65.109.8.72:22-147.75.109.163:60402.service: Deactivated successfully. Nov 8 00:25:59.573305 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:25:59.574536 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:25:59.576661 systemd-logind[1479]: Removed session 9. Nov 8 00:26:04.744500 systemd[1]: Started sshd@9-65.109.8.72:22-147.75.109.163:41556.service - OpenSSH per-connection server daemon (147.75.109.163:41556). 
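The pod_startup_latency_tracker entries above carry enough raw timestamps to re-derive the reported durations: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check with the numbers quoted verbatim from the cilium-zpvqc entry, expressed as seconds past 00:24:00 UTC:

    # Numbers copied verbatim from the cilium-zpvqc startup-latency entry above.
    created    = 32.0            # podCreationTimestamp      ...00:24:32
    pull_start = 34.286993279    # firstStartedPulling       ...00:24:34.286993279
    pull_end   = 40.9851425      # lastFinishedPulling       ...00:24:40.9851425
    observed   = 45.849457588    # watchObservedRunningTime  ...00:24:45.849457588

    e2e = observed - created               # 13.849457588 -> podStartE2EDuration
    slo = e2e - (pull_end - pull_start)    # 7.151308367  -> podStartSLOduration
    print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")

For the two coredns pods the pull timestamps are the zero sentinel (0001-01-01), so the two durations coincide: 20.868145589s and 29.469751365s, i.e. 00:24:52.868145589 and 00:25:01.469751365 minus the shared 00:24:32 creation time.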
Nov 8 00:26:05.757468 sshd[3970]: Accepted publickey for core from 147.75.109.163 port 41556 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:05.762918 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:05.768243 systemd-logind[1479]: New session 10 of user core. Nov 8 00:26:05.772399 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:26:06.553465 sshd[3970]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:06.558482 systemd[1]: sshd@9-65.109.8.72:22-147.75.109.163:41556.service: Deactivated successfully. Nov 8 00:26:06.561914 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:26:06.563070 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:26:06.564380 systemd-logind[1479]: Removed session 10. Nov 8 00:26:06.762449 systemd[1]: Started sshd@10-65.109.8.72:22-147.75.109.163:41566.service - OpenSSH per-connection server daemon (147.75.109.163:41566). Nov 8 00:26:07.881853 sshd[3984]: Accepted publickey for core from 147.75.109.163 port 41566 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:07.883684 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:07.888394 systemd-logind[1479]: New session 11 of user core. Nov 8 00:26:07.894432 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:26:08.803101 sshd[3984]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:08.812600 systemd[1]: sshd@10-65.109.8.72:22-147.75.109.163:41566.service: Deactivated successfully. Nov 8 00:26:08.815588 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:26:08.816773 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:26:08.818027 systemd-logind[1479]: Removed session 11. Nov 8 00:26:08.959552 systemd[1]: Started sshd@11-65.109.8.72:22-147.75.109.163:41582.service - OpenSSH per-connection server daemon (147.75.109.163:41582). Nov 8 00:26:09.957430 sshd[3996]: Accepted publickey for core from 147.75.109.163 port 41582 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:09.958743 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:09.963677 systemd-logind[1479]: New session 12 of user core. Nov 8 00:26:09.968349 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:26:10.709726 sshd[3996]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:10.713334 systemd[1]: sshd@11-65.109.8.72:22-147.75.109.163:41582.service: Deactivated successfully. Nov 8 00:26:10.715043 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:26:10.715846 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:26:10.717012 systemd-logind[1479]: Removed session 12. Nov 8 00:26:15.923203 systemd[1]: Started sshd@12-65.109.8.72:22-147.75.109.163:53950.service - OpenSSH per-connection server daemon (147.75.109.163:53950). Nov 8 00:26:17.038850 sshd[4009]: Accepted publickey for core from 147.75.109.163 port 53950 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:17.039902 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:17.044068 systemd-logind[1479]: New session 13 of user core. Nov 8 00:26:17.049374 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 8 00:26:17.873891 sshd[4009]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:17.877831 systemd[1]: sshd@12-65.109.8.72:22-147.75.109.163:53950.service: Deactivated successfully. Nov 8 00:26:17.879704 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:26:17.880772 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:26:17.881952 systemd-logind[1479]: Removed session 13. Nov 8 00:26:18.064435 systemd[1]: Started sshd@13-65.109.8.72:22-147.75.109.163:53962.service - OpenSSH per-connection server daemon (147.75.109.163:53962). Nov 8 00:26:19.158353 sshd[4021]: Accepted publickey for core from 147.75.109.163 port 53962 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:19.160210 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:19.165948 systemd-logind[1479]: New session 14 of user core. Nov 8 00:26:19.172417 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:26:20.181921 sshd[4021]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:20.187478 systemd[1]: sshd@13-65.109.8.72:22-147.75.109.163:53962.service: Deactivated successfully. Nov 8 00:26:20.190586 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:26:20.192738 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:26:20.194211 systemd-logind[1479]: Removed session 14. Nov 8 00:26:20.343580 systemd[1]: Started sshd@14-65.109.8.72:22-147.75.109.163:42934.service - OpenSSH per-connection server daemon (147.75.109.163:42934). Nov 8 00:26:21.351836 sshd[4032]: Accepted publickey for core from 147.75.109.163 port 42934 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:21.353801 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:21.359770 systemd-logind[1479]: New session 15 of user core. Nov 8 00:26:21.363413 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:26:22.767950 sshd[4032]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:22.774622 systemd[1]: sshd@14-65.109.8.72:22-147.75.109.163:42934.service: Deactivated successfully. Nov 8 00:26:22.776708 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:26:22.778257 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:26:22.780089 systemd-logind[1479]: Removed session 15. Nov 8 00:26:22.943532 systemd[1]: Started sshd@15-65.109.8.72:22-147.75.109.163:42948.service - OpenSSH per-connection server daemon (147.75.109.163:42948). Nov 8 00:26:23.938515 sshd[4051]: Accepted publickey for core from 147.75.109.163 port 42948 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:23.940236 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:23.945527 systemd-logind[1479]: New session 16 of user core. Nov 8 00:26:23.951490 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:26:24.832186 sshd[4051]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:24.835397 systemd[1]: sshd@15-65.109.8.72:22-147.75.109.163:42948.service: Deactivated successfully. Nov 8 00:26:24.838111 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:26:24.839662 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:26:24.840843 systemd-logind[1479]: Removed session 16. 
Nov 8 00:26:25.039998 systemd[1]: Started sshd@16-65.109.8.72:22-147.75.109.163:42958.service - OpenSSH per-connection server daemon (147.75.109.163:42958). Nov 8 00:26:26.151971 sshd[4062]: Accepted publickey for core from 147.75.109.163 port 42958 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:26.153857 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:26.159561 systemd-logind[1479]: New session 17 of user core. Nov 8 00:26:26.166440 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:26:26.983292 sshd[4062]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:26.986561 systemd[1]: sshd@16-65.109.8.72:22-147.75.109.163:42958.service: Deactivated successfully. Nov 8 00:26:26.988722 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:26:26.990575 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:26:26.991917 systemd-logind[1479]: Removed session 17. Nov 8 00:26:32.139309 systemd[1]: Started sshd@17-65.109.8.72:22-147.75.109.163:36310.service - OpenSSH per-connection server daemon (147.75.109.163:36310). Nov 8 00:26:33.149139 sshd[4079]: Accepted publickey for core from 147.75.109.163 port 36310 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:33.150621 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:33.155320 systemd-logind[1479]: New session 18 of user core. Nov 8 00:26:33.162368 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:26:33.909532 sshd[4079]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:33.914082 systemd[1]: sshd@17-65.109.8.72:22-147.75.109.163:36310.service: Deactivated successfully. Nov 8 00:26:33.917072 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:26:33.918003 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:26:33.919435 systemd-logind[1479]: Removed session 18. Nov 8 00:26:39.083396 systemd[1]: Started sshd@18-65.109.8.72:22-147.75.109.163:36312.service - OpenSSH per-connection server daemon (147.75.109.163:36312). Nov 8 00:26:40.090520 sshd[4094]: Accepted publickey for core from 147.75.109.163 port 36312 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:40.092194 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:40.097589 systemd-logind[1479]: New session 19 of user core. Nov 8 00:26:40.099400 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:26:40.853369 sshd[4094]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:40.856584 systemd[1]: sshd@18-65.109.8.72:22-147.75.109.163:36312.service: Deactivated successfully. Nov 8 00:26:40.858969 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:26:40.861078 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:26:40.862845 systemd-logind[1479]: Removed session 19. Nov 8 00:26:41.031901 systemd[1]: Started sshd@19-65.109.8.72:22-147.75.109.163:44410.service - OpenSSH per-connection server daemon (147.75.109.163:44410). 
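From here on the journal is dominated by sshd/systemd-logind session churn: each connection shows an "Accepted publickey", a pam_unix "session opened", a numbered session scope, and the matching "session closed" / "Removed session" pair. A small sketch for sanity-checking that every open has a close, under the same one-entry-per-line assumption as before (the helper name is illustrative):

    import re
    import sys

    # Matches the pam_unix lines emitted by sshd in the journal above.
    OPENED = re.compile(r'pam_unix\(sshd:session\): session opened for user (\w+)\(')
    CLOSED = re.compile(r'pam_unix\(sshd:session\): session closed for user (\w+)')

    def session_balance(lines, user="core"):
        """Count session opens and closes for one user and return the pair."""
        opened = closed = 0
        for line in lines:
            if (m := OPENED.search(line)) and m.group(1) == user:
                opened += 1
            elif (m := CLOSED.search(line)) and m.group(1) == user:
                closed += 1
        return opened, closed

    if __name__ == "__main__":
        o, c = session_balance(sys.stdin)
        print(f"user=core opened={o} closed={c} still_open={o - c}")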
Nov 8 00:26:42.037346 sshd[4107]: Accepted publickey for core from 147.75.109.163 port 44410 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:42.038948 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:42.044294 systemd-logind[1479]: New session 20 of user core. Nov 8 00:26:42.050389 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:26:43.899554 containerd[1501]: time="2025-11-08T00:26:43.899481617Z" level=info msg="StopContainer for \"dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6\" with timeout 30 (s)" Nov 8 00:26:43.903976 containerd[1501]: time="2025-11-08T00:26:43.903947251Z" level=info msg="Stop container \"dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6\" with signal terminated" Nov 8 00:26:43.911949 systemd[1]: run-containerd-runc-k8s.io-35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832-runc.J6c6F4.mount: Deactivated successfully. Nov 8 00:26:43.928098 containerd[1501]: time="2025-11-08T00:26:43.928007281Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:26:43.931017 systemd[1]: cri-containerd-dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6.scope: Deactivated successfully. Nov 8 00:26:43.935411 containerd[1501]: time="2025-11-08T00:26:43.935293269Z" level=info msg="StopContainer for \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\" with timeout 2 (s)" Nov 8 00:26:43.935788 containerd[1501]: time="2025-11-08T00:26:43.935719723Z" level=info msg="Stop container \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\" with signal terminated" Nov 8 00:26:43.943620 systemd-networkd[1414]: lxc_health: Link DOWN Nov 8 00:26:43.943627 systemd-networkd[1414]: lxc_health: Lost carrier Nov 8 00:26:43.963655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6-rootfs.mount: Deactivated successfully. Nov 8 00:26:43.967823 systemd[1]: cri-containerd-35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832.scope: Deactivated successfully. Nov 8 00:26:43.968429 systemd[1]: cri-containerd-35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832.scope: Consumed 6.288s CPU time. Nov 8 00:26:43.986849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832-rootfs.mount: Deactivated successfully. 
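The teardown above begins with containerd's StopContainer requests ("with timeout 30 (s)" for the operator container, "with timeout 2 (s)" for the agent); the matching "returns successfully" entries appear a little further down, after the scopes are deactivated and the shims disconnect. A sketch for pairing requests with completions, again assuming one journal entry per line; the function name is illustrative only:

    import re
    import sys

    # Request and completion share the 64-hex-char container ID; both message shapes
    # occur in the journal above.
    STOP_REQ = re.compile(r'msg="StopContainer for \\"([0-9a-f]{64})\\" with timeout')
    STOP_OK  = re.compile(r'msg="StopContainer for \\"([0-9a-f]{64})\\" returns successfully"')

    def unfinished_stops(lines):
        """Return IDs with a stop request but no 'returns successfully' entry."""
        requested, completed = set(), set()
        for line in lines:
            if m := STOP_REQ.search(line):
                requested.add(m.group(1))
            if m := STOP_OK.search(line):
                completed.add(m.group(1))
        return requested - completed

    if __name__ == "__main__":
        pending = unfinished_stops(sys.stdin)
        print("all stop requests completed" if not pending else sorted(pending))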
Nov 8 00:26:43.991160 containerd[1501]: time="2025-11-08T00:26:43.990921508Z" level=info msg="shim disconnected" id=dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6 namespace=k8s.io Nov 8 00:26:43.991160 containerd[1501]: time="2025-11-08T00:26:43.990997902Z" level=warning msg="cleaning up after shim disconnected" id=dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6 namespace=k8s.io Nov 8 00:26:43.991160 containerd[1501]: time="2025-11-08T00:26:43.991007881Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:43.992801 containerd[1501]: time="2025-11-08T00:26:43.992668459Z" level=info msg="shim disconnected" id=35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832 namespace=k8s.io Nov 8 00:26:43.992801 containerd[1501]: time="2025-11-08T00:26:43.992717120Z" level=warning msg="cleaning up after shim disconnected" id=35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832 namespace=k8s.io Nov 8 00:26:43.992801 containerd[1501]: time="2025-11-08T00:26:43.992724865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:44.006291 containerd[1501]: time="2025-11-08T00:26:44.006242603Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:26:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:26:44.008721 containerd[1501]: time="2025-11-08T00:26:44.008689793Z" level=info msg="StopContainer for \"dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6\" returns successfully" Nov 8 00:26:44.009413 containerd[1501]: time="2025-11-08T00:26:44.009211276Z" level=info msg="StopContainer for \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\" returns successfully" Nov 8 00:26:44.013686 containerd[1501]: time="2025-11-08T00:26:44.013513351Z" level=info msg="StopPodSandbox for \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\"" Nov 8 00:26:44.013686 containerd[1501]: time="2025-11-08T00:26:44.013562903Z" level=info msg="Container to stop \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:26:44.013686 containerd[1501]: time="2025-11-08T00:26:44.013579396Z" level=info msg="Container to stop \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:26:44.013686 containerd[1501]: time="2025-11-08T00:26:44.013593402Z" level=info msg="Container to stop \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:26:44.013686 containerd[1501]: time="2025-11-08T00:26:44.013607618Z" level=info msg="Container to stop \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:26:44.013686 containerd[1501]: time="2025-11-08T00:26:44.013619601Z" level=info msg="Container to stop \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:26:44.013878 containerd[1501]: time="2025-11-08T00:26:44.013755516Z" level=info msg="StopPodSandbox for \"f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5\"" Nov 8 00:26:44.013878 containerd[1501]: time="2025-11-08T00:26:44.013788078Z" level=info 
msg="Container to stop \"dc5d10d2666f8e3aedbc046f85d75607633c6767a7386e2cd39414399cb1a8a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:26:44.019095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f-shm.mount: Deactivated successfully. Nov 8 00:26:44.019189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5-shm.mount: Deactivated successfully. Nov 8 00:26:44.023337 systemd[1]: cri-containerd-ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f.scope: Deactivated successfully. Nov 8 00:26:44.023995 systemd[1]: cri-containerd-f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5.scope: Deactivated successfully. Nov 8 00:26:44.055759 containerd[1501]: time="2025-11-08T00:26:44.055552430Z" level=info msg="shim disconnected" id=f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5 namespace=k8s.io Nov 8 00:26:44.055759 containerd[1501]: time="2025-11-08T00:26:44.055615489Z" level=warning msg="cleaning up after shim disconnected" id=f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5 namespace=k8s.io Nov 8 00:26:44.055759 containerd[1501]: time="2025-11-08T00:26:44.055623754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:44.057431 containerd[1501]: time="2025-11-08T00:26:44.057388489Z" level=info msg="shim disconnected" id=ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f namespace=k8s.io Nov 8 00:26:44.057431 containerd[1501]: time="2025-11-08T00:26:44.057427442Z" level=warning msg="cleaning up after shim disconnected" id=ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f namespace=k8s.io Nov 8 00:26:44.057525 containerd[1501]: time="2025-11-08T00:26:44.057437411Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:44.068457 containerd[1501]: time="2025-11-08T00:26:44.067501872Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:26:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:26:44.070573 containerd[1501]: time="2025-11-08T00:26:44.070541929Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:26:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:26:44.075120 containerd[1501]: time="2025-11-08T00:26:44.075083044Z" level=info msg="TearDown network for sandbox \"f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5\" successfully" Nov 8 00:26:44.075120 containerd[1501]: time="2025-11-08T00:26:44.075109684Z" level=info msg="StopPodSandbox for \"f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5\" returns successfully" Nov 8 00:26:44.076003 containerd[1501]: time="2025-11-08T00:26:44.075969885Z" level=info msg="TearDown network for sandbox \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" successfully" Nov 8 00:26:44.076003 containerd[1501]: time="2025-11-08T00:26:44.075995303Z" level=info msg="StopPodSandbox for \"ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f\" returns successfully" Nov 8 00:26:44.092562 kubelet[2551]: I1108 00:26:44.082630 2551 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5" Nov 8 00:26:44.092986 kubelet[2551]: I1108 00:26:44.092617 2551 scope.go:117] "RemoveContainer" containerID="35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832" Nov 8 00:26:44.105749 containerd[1501]: time="2025-11-08T00:26:44.105481914Z" level=info msg="RemoveContainer for \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\"" Nov 8 00:26:44.113684 containerd[1501]: time="2025-11-08T00:26:44.113538382Z" level=info msg="RemoveContainer for \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\" returns successfully" Nov 8 00:26:44.113972 kubelet[2551]: I1108 00:26:44.113933 2551 scope.go:117] "RemoveContainer" containerID="cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0" Nov 8 00:26:44.114989 containerd[1501]: time="2025-11-08T00:26:44.114962886Z" level=info msg="RemoveContainer for \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\"" Nov 8 00:26:44.117801 containerd[1501]: time="2025-11-08T00:26:44.117416629Z" level=info msg="RemoveContainer for \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\" returns successfully" Nov 8 00:26:44.117885 kubelet[2551]: I1108 00:26:44.117532 2551 scope.go:117] "RemoveContainer" containerID="59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8" Nov 8 00:26:44.119083 kubelet[2551]: I1108 00:26:44.118946 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-etc-cni-netd\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119083 kubelet[2551]: I1108 00:26:44.118985 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-clustermesh-secrets\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119083 kubelet[2551]: I1108 00:26:44.119005 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh78f\" (UniqueName: \"kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-kube-api-access-xh78f\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119083 kubelet[2551]: I1108 00:26:44.119022 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c6ce54f-ded5-426a-aa0c-1c121606e402-cilium-config-path\") pod \"8c6ce54f-ded5-426a-aa0c-1c121606e402\" (UID: \"8c6ce54f-ded5-426a-aa0c-1c121606e402\") " Nov 8 00:26:44.119083 kubelet[2551]: I1108 00:26:44.119039 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-config-path\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119083 kubelet[2551]: I1108 00:26:44.119071 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cni-path\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119296 containerd[1501]: 
time="2025-11-08T00:26:44.119133843Z" level=info msg="RemoveContainer for \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\"" Nov 8 00:26:44.119790 kubelet[2551]: I1108 00:26:44.119452 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lzk\" (UniqueName: \"kubernetes.io/projected/8c6ce54f-ded5-426a-aa0c-1c121606e402-kube-api-access-h6lzk\") pod \"8c6ce54f-ded5-426a-aa0c-1c121606e402\" (UID: \"8c6ce54f-ded5-426a-aa0c-1c121606e402\") " Nov 8 00:26:44.119790 kubelet[2551]: I1108 00:26:44.119481 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-net\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119790 kubelet[2551]: I1108 00:26:44.119530 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-lib-modules\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119790 kubelet[2551]: I1108 00:26:44.119550 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hubble-tls\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119790 kubelet[2551]: I1108 00:26:44.119566 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-cgroup\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119790 kubelet[2551]: I1108 00:26:44.119580 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-kernel\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119927 kubelet[2551]: I1108 00:26:44.119611 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-bpf-maps\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119927 kubelet[2551]: I1108 00:26:44.119624 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hostproc\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119927 kubelet[2551]: I1108 00:26:44.119692 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-xtables-lock\") pod \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.119927 kubelet[2551]: I1108 00:26:44.119710 2551 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-run\") pod 
\"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\" (UID: \"5ee5f43a-5438-4f97-90d6-44dbf30f42b7\") " Nov 8 00:26:44.121700 containerd[1501]: time="2025-11-08T00:26:44.121639244Z" level=info msg="RemoveContainer for \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\" returns successfully" Nov 8 00:26:44.124074 kubelet[2551]: I1108 00:26:44.122607 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.124074 kubelet[2551]: I1108 00:26:44.123814 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.124074 kubelet[2551]: I1108 00:26:44.123880 2551 scope.go:117] "RemoveContainer" containerID="5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a" Nov 8 00:26:44.124074 kubelet[2551]: I1108 00:26:44.123936 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.127750 kubelet[2551]: I1108 00:26:44.127711 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-kube-api-access-xh78f" (OuterVolumeSpecName: "kube-api-access-xh78f") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "kube-api-access-xh78f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:26:44.128548 kubelet[2551]: I1108 00:26:44.128529 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:26:44.130117 kubelet[2551]: I1108 00:26:44.129653 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c6ce54f-ded5-426a-aa0c-1c121606e402-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c6ce54f-ded5-426a-aa0c-1c121606e402" (UID: "8c6ce54f-ded5-426a-aa0c-1c121606e402"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:26:44.130644 kubelet[2551]: I1108 00:26:44.130627 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:26:44.130722 kubelet[2551]: I1108 00:26:44.130711 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.130785 kubelet[2551]: I1108 00:26:44.130773 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.130846 kubelet[2551]: I1108 00:26:44.130836 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.130900 kubelet[2551]: I1108 00:26:44.130891 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.130951 kubelet[2551]: I1108 00:26:44.130941 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.131656 kubelet[2551]: I1108 00:26:44.131629 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:26:44.131697 kubelet[2551]: I1108 00:26:44.131664 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.131885 kubelet[2551]: I1108 00:26:44.131863 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ee5f43a-5438-4f97-90d6-44dbf30f42b7" (UID: "5ee5f43a-5438-4f97-90d6-44dbf30f42b7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:26:44.132033 containerd[1501]: time="2025-11-08T00:26:44.132009911Z" level=info msg="RemoveContainer for \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\"" Nov 8 00:26:44.133969 kubelet[2551]: I1108 00:26:44.133945 2551 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6ce54f-ded5-426a-aa0c-1c121606e402-kube-api-access-h6lzk" (OuterVolumeSpecName: "kube-api-access-h6lzk") pod "8c6ce54f-ded5-426a-aa0c-1c121606e402" (UID: "8c6ce54f-ded5-426a-aa0c-1c121606e402"). InnerVolumeSpecName "kube-api-access-h6lzk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:26:44.135006 containerd[1501]: time="2025-11-08T00:26:44.134971881Z" level=info msg="RemoveContainer for \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\" returns successfully" Nov 8 00:26:44.135170 kubelet[2551]: I1108 00:26:44.135137 2551 scope.go:117] "RemoveContainer" containerID="2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec" Nov 8 00:26:44.136135 containerd[1501]: time="2025-11-08T00:26:44.136111679Z" level=info msg="RemoveContainer for \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\"" Nov 8 00:26:44.138465 containerd[1501]: time="2025-11-08T00:26:44.138424887Z" level=info msg="RemoveContainer for \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\" returns successfully" Nov 8 00:26:44.138582 kubelet[2551]: I1108 00:26:44.138553 2551 scope.go:117] "RemoveContainer" containerID="35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832" Nov 8 00:26:44.145692 containerd[1501]: time="2025-11-08T00:26:44.140282286Z" level=error msg="ContainerStatus for \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\": not found" Nov 8 00:26:44.145920 kubelet[2551]: E1108 00:26:44.145893 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\": not found" containerID="35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832" Nov 8 00:26:44.145989 kubelet[2551]: I1108 00:26:44.145923 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832"} err="failed to get container status \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\": rpc error: code = NotFound desc = an error occurred when try to find container \"35bfdabf8d1eb77391a10a46faf20ddad829ab21206036c9aa11e4a71e424832\": not found" Nov 8 00:26:44.146067 kubelet[2551]: I1108 00:26:44.145993 2551 scope.go:117] "RemoveContainer" containerID="cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0" Nov 8 00:26:44.146361 containerd[1501]: time="2025-11-08T00:26:44.146163125Z" level=error msg="ContainerStatus for \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\": not found" Nov 8 00:26:44.150152 kubelet[2551]: E1108 00:26:44.149302 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\": not found" containerID="cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0" Nov 8 00:26:44.150152 kubelet[2551]: I1108 00:26:44.149323 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0"} err="failed to get container status \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb2b765dbc964bb3a86448950397e412e78ab5fbbe12b240ca1a70353e4014d0\": not found" Nov 8 00:26:44.150152 kubelet[2551]: I1108 00:26:44.149340 2551 scope.go:117] "RemoveContainer" containerID="59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8" Nov 8 00:26:44.150152 kubelet[2551]: E1108 00:26:44.149554 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\": not found" containerID="59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8" Nov 8 00:26:44.150152 kubelet[2551]: I1108 00:26:44.149569 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8"} err="failed to get container status \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\": not found" Nov 8 00:26:44.150152 kubelet[2551]: I1108 00:26:44.149633 2551 scope.go:117] "RemoveContainer" containerID="5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a" Nov 8 00:26:44.150361 containerd[1501]: time="2025-11-08T00:26:44.149456450Z" level=error msg="ContainerStatus for \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59255e348dd2ac73682e5b6d33a17d2fdb37ef20ab52520fcbe5d59654c173b8\": not found" Nov 8 00:26:44.150361 containerd[1501]: time="2025-11-08T00:26:44.149789877Z" level=error msg="ContainerStatus for \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\": not found" Nov 8 00:26:44.150361 containerd[1501]: time="2025-11-08T00:26:44.150023808Z" level=error msg="ContainerStatus for \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\": not found" Nov 8 00:26:44.150427 kubelet[2551]: E1108 00:26:44.149890 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\": not found" containerID="5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a" Nov 8 00:26:44.150427 kubelet[2551]: I1108 00:26:44.149905 2551 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a"} err="failed to get container status \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e5d8e18940d03e3a2b2c33e7835602f4f6e20a4253c7ab94cc660179f44d05a\": not found" Nov 8 00:26:44.150427 kubelet[2551]: I1108 00:26:44.149917 2551 scope.go:117] "RemoveContainer" containerID="2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec" Nov 8 00:26:44.150427 kubelet[2551]: E1108 00:26:44.150261 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\": not found" containerID="2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec" Nov 8 00:26:44.150427 kubelet[2551]: I1108 00:26:44.150278 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec"} err="failed to get container status \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ce2a97e1f43be9edbc9906f4ab70eb31c48feb1c4d413b7f36094474ff868ec\": not found" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220244 2551 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220283 2551 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-bpf-maps\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220297 2551 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-run\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220309 2551 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hostproc\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220318 2551 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-xtables-lock\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220329 2551 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xh78f\" (UniqueName: \"kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-kube-api-access-xh78f\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220337 2551 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-etc-cni-netd\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220452 kubelet[2551]: I1108 00:26:44.220347 2551 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-clustermesh-secrets\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220359 2551 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c6ce54f-ded5-426a-aa0c-1c121606e402-cilium-config-path\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220369 2551 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-config-path\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220378 2551 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cni-path\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220388 2551 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h6lzk\" (UniqueName: \"kubernetes.io/projected/8c6ce54f-ded5-426a-aa0c-1c121606e402-kube-api-access-h6lzk\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220397 2551 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-host-proc-sys-net\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220407 2551 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-lib-modules\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220416 2551 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-hubble-tls\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.220786 kubelet[2551]: I1108 00:26:44.220424 2551 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ee5f43a-5438-4f97-90d6-44dbf30f42b7-cilium-cgroup\") on node \"ci-4081-3-6-n-d839b30383\" DevicePath \"\"" Nov 8 00:26:44.906402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad7fe8fc914db613c7725775e056fe233f48b8f06d4a30ef01671adeb6b6d13f-rootfs.mount: Deactivated successfully. Nov 8 00:26:44.906519 systemd[1]: var-lib-kubelet-pods-5ee5f43a\x2d5438\x2d4f97\x2d90d6\x2d44dbf30f42b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 8 00:26:44.906608 systemd[1]: var-lib-kubelet-pods-5ee5f43a\x2d5438\x2d4f97\x2d90d6\x2d44dbf30f42b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 8 00:26:44.906694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5fa2c09f95c63fd2b8029d1a8e1a1a5ada467ab2ce6ecbb9cf158ccb1a9faf5-rootfs.mount: Deactivated successfully. Nov 8 00:26:44.906771 systemd[1]: var-lib-kubelet-pods-8c6ce54f\x2dded5\x2d426a\x2daa0c\x2d1c121606e402-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh6lzk.mount: Deactivated successfully. 
Nov 8 00:26:44.906858 systemd[1]: var-lib-kubelet-pods-5ee5f43a\x2d5438\x2d4f97\x2d90d6\x2d44dbf30f42b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxh78f.mount: Deactivated successfully. Nov 8 00:26:45.099895 systemd[1]: Removed slice kubepods-besteffort-pod8c6ce54f_ded5_426a_aa0c_1c121606e402.slice - libcontainer container kubepods-besteffort-pod8c6ce54f_ded5_426a_aa0c_1c121606e402.slice. Nov 8 00:26:45.108085 systemd[1]: Removed slice kubepods-burstable-pod5ee5f43a_5438_4f97_90d6_44dbf30f42b7.slice - libcontainer container kubepods-burstable-pod5ee5f43a_5438_4f97_90d6_44dbf30f42b7.slice. Nov 8 00:26:45.108196 systemd[1]: kubepods-burstable-pod5ee5f43a_5438_4f97_90d6_44dbf30f42b7.slice: Consumed 6.363s CPU time. Nov 8 00:26:45.709705 kubelet[2551]: I1108 00:26:45.709650 2551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee5f43a-5438-4f97-90d6-44dbf30f42b7" path="/var/lib/kubelet/pods/5ee5f43a-5438-4f97-90d6-44dbf30f42b7/volumes" Nov 8 00:26:45.710567 kubelet[2551]: I1108 00:26:45.710479 2551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6ce54f-ded5-426a-aa0c-1c121606e402" path="/var/lib/kubelet/pods/8c6ce54f-ded5-426a-aa0c-1c121606e402/volumes" Nov 8 00:26:45.809135 kubelet[2551]: E1108 00:26:45.809054 2551 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 8 00:26:46.000798 sshd[4107]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:46.006562 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:26:46.007611 systemd[1]: sshd@19-65.109.8.72:22-147.75.109.163:44410.service: Deactivated successfully. Nov 8 00:26:46.009864 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:26:46.011028 systemd-logind[1479]: Removed session 20. Nov 8 00:26:46.178574 systemd[1]: Started sshd@20-65.109.8.72:22-147.75.109.163:44418.service - OpenSSH per-connection server daemon (147.75.109.163:44418). Nov 8 00:26:47.189589 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 44418 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:47.191267 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:47.196847 systemd-logind[1479]: New session 21 of user core. Nov 8 00:26:47.204505 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:26:48.135201 kubelet[2551]: I1108 00:26:48.123239 2551 memory_manager.go:355] "RemoveStaleState removing state" podUID="8c6ce54f-ded5-426a-aa0c-1c121606e402" containerName="cilium-operator" Nov 8 00:26:48.135201 kubelet[2551]: I1108 00:26:48.135015 2551 memory_manager.go:355] "RemoveStaleState removing state" podUID="5ee5f43a-5438-4f97-90d6-44dbf30f42b7" containerName="cilium-agent" Nov 8 00:26:48.145820 systemd[1]: Created slice kubepods-burstable-pod94090b98_6b0c_4132_bb28_2a4d0ba92a58.slice - libcontainer container kubepods-burstable-pod94090b98_6b0c_4132_bb28_2a4d0ba92a58.slice. 
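The slice names systemd removes and creates here encode the pod UID and QoS class directly (systemd cgroup driver): the UID's dashes become underscores and the QoS class is folded into the slice name. A tiny mapping helper covering the two shapes visible above, burstable and besteffort (other QoS classes are not shown in this journal and are not covered); the helper name is illustrative:

    def pod_slice(uid: str, qos: str) -> str:
        """Slice name used by the systemd cgroup driver for the QoS classes seen here."""
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    # Both shapes occur verbatim in the journal above:
    assert pod_slice("94090b98-6b0c-4132-bb28-2a4d0ba92a58", "burstable") == \
        "kubepods-burstable-pod94090b98_6b0c_4132_bb28_2a4d0ba92a58.slice"
    assert pod_slice("8c6ce54f-ded5-426a-aa0c-1c121606e402", "besteffort") == \
        "kubepods-besteffort-pod8c6ce54f_ded5_426a_aa0c_1c121606e402.slice"

The removed burstable slice is also where the pod-wide CPU figure shows up (6.363s consumed, versus the 6.288s reported earlier for the cilium-agent container's own scope); the roughly 0.075s difference is what the pod's remaining containers and overhead used.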
Nov 8 00:26:48.251036 kubelet[2551]: I1108 00:26:48.250977 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-cilium-cgroup\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251036 kubelet[2551]: I1108 00:26:48.251017 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94090b98-6b0c-4132-bb28-2a4d0ba92a58-cilium-ipsec-secrets\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251036 kubelet[2551]: I1108 00:26:48.251040 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g66x\" (UniqueName: \"kubernetes.io/projected/94090b98-6b0c-4132-bb28-2a4d0ba92a58-kube-api-access-9g66x\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251476 kubelet[2551]: I1108 00:26:48.251059 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-cni-path\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251476 kubelet[2551]: I1108 00:26:48.251072 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-lib-modules\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251476 kubelet[2551]: I1108 00:26:48.251088 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94090b98-6b0c-4132-bb28-2a4d0ba92a58-cilium-config-path\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251476 kubelet[2551]: I1108 00:26:48.251103 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-bpf-maps\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251476 kubelet[2551]: I1108 00:26:48.251143 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-host-proc-sys-net\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251476 kubelet[2551]: I1108 00:26:48.251159 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-host-proc-sys-kernel\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251704 kubelet[2551]: I1108 00:26:48.251173 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-cilium-run\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.251704 kubelet[2551]: I1108 00:26:48.251200 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-hostproc\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.254147 kubelet[2551]: I1108 00:26:48.251242 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-xtables-lock\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.254229 kubelet[2551]: I1108 00:26:48.254167 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94090b98-6b0c-4132-bb28-2a4d0ba92a58-clustermesh-secrets\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.254229 kubelet[2551]: I1108 00:26:48.254192 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94090b98-6b0c-4132-bb28-2a4d0ba92a58-hubble-tls\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.254229 kubelet[2551]: I1108 00:26:48.254209 2551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94090b98-6b0c-4132-bb28-2a4d0ba92a58-etc-cni-netd\") pod \"cilium-zfjlb\" (UID: \"94090b98-6b0c-4132-bb28-2a4d0ba92a58\") " pod="kube-system/cilium-zfjlb" Nov 8 00:26:48.338423 sshd[4268]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:48.341328 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:26:48.341591 systemd[1]: sshd@20-65.109.8.72:22-147.75.109.163:44418.service: Deactivated successfully. Nov 8 00:26:48.344718 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:26:48.347294 systemd-logind[1479]: Removed session 21. Nov 8 00:26:48.448885 containerd[1501]: time="2025-11-08T00:26:48.448770436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfjlb,Uid:94090b98-6b0c-4132-bb28-2a4d0ba92a58,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:48.474064 containerd[1501]: time="2025-11-08T00:26:48.473800087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:48.474064 containerd[1501]: time="2025-11-08T00:26:48.473863065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:48.474064 containerd[1501]: time="2025-11-08T00:26:48.473879746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:48.474064 containerd[1501]: time="2025-11-08T00:26:48.473960709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:48.493430 systemd[1]: Started cri-containerd-ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709.scope - libcontainer container ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709. Nov 8 00:26:48.521966 containerd[1501]: time="2025-11-08T00:26:48.521919526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfjlb,Uid:94090b98-6b0c-4132-bb28-2a4d0ba92a58,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\"" Nov 8 00:26:48.526292 containerd[1501]: time="2025-11-08T00:26:48.526250452Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 8 00:26:48.540300 containerd[1501]: time="2025-11-08T00:26:48.538956404Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137\"" Nov 8 00:26:48.544239 containerd[1501]: time="2025-11-08T00:26:48.541827530Z" level=info msg="StartContainer for \"109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137\"" Nov 8 00:26:48.548959 systemd[1]: Started sshd@21-65.109.8.72:22-147.75.109.163:44424.service - OpenSSH per-connection server daemon (147.75.109.163:44424). Nov 8 00:26:48.580426 systemd[1]: Started cri-containerd-109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137.scope - libcontainer container 109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137. Nov 8 00:26:48.601752 containerd[1501]: time="2025-11-08T00:26:48.601709873Z" level=info msg="StartContainer for \"109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137\" returns successfully" Nov 8 00:26:48.610445 systemd[1]: cri-containerd-109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137.scope: Deactivated successfully. 
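mount-cgroup is the first of the Cilium pod's init containers, which is why its scope is deactivated right after StartContainer returns: each init step runs to completion before the next one (apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state below) is created in the same sandbox. A hedged client-go sketch that lists the init container sequence for this pod; the kubeconfig path is an assumption, and it presumes a client-go version whose methods take a context:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for running this off-node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name and namespace are taken from the log above.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-zfjlb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		fmt.Printf("%-28s restarts=%d terminated=%v\n", st.Name, st.RestartCount, st.State.Terminated != nil)
	}
}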
Nov 8 00:26:48.645974 containerd[1501]: time="2025-11-08T00:26:48.645842574Z" level=info msg="shim disconnected" id=109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137 namespace=k8s.io Nov 8 00:26:48.645974 containerd[1501]: time="2025-11-08T00:26:48.645900442Z" level=warning msg="cleaning up after shim disconnected" id=109e29c24ff3d951d64cfc36d0676b7d6db535da878a6c52e6c8ed6036cb3137 namespace=k8s.io Nov 8 00:26:48.645974 containerd[1501]: time="2025-11-08T00:26:48.645909469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:48.831705 kubelet[2551]: I1108 00:26:48.831646 2551 setters.go:602] "Node became not ready" node="ci-4081-3-6-n-d839b30383" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-08T00:26:48Z","lastTransitionTime":"2025-11-08T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 8 00:26:49.105089 containerd[1501]: time="2025-11-08T00:26:49.104974115Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 8 00:26:49.119437 containerd[1501]: time="2025-11-08T00:26:49.119357533Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76\"" Nov 8 00:26:49.121394 containerd[1501]: time="2025-11-08T00:26:49.120373346Z" level=info msg="StartContainer for \"561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76\"" Nov 8 00:26:49.151408 systemd[1]: Started cri-containerd-561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76.scope - libcontainer container 561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76. Nov 8 00:26:49.181580 containerd[1501]: time="2025-11-08T00:26:49.179578788Z" level=info msg="StartContainer for \"561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76\" returns successfully" Nov 8 00:26:49.185942 systemd[1]: cri-containerd-561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76.scope: Deactivated successfully. Nov 8 00:26:49.206344 containerd[1501]: time="2025-11-08T00:26:49.206269454Z" level=info msg="shim disconnected" id=561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76 namespace=k8s.io Nov 8 00:26:49.206344 containerd[1501]: time="2025-11-08T00:26:49.206339565Z" level=warning msg="cleaning up after shim disconnected" id=561f1f2c576c1a1c90d257a0a503f583ca3eca2f8b62b7662af515baa1cb6b76 namespace=k8s.io Nov 8 00:26:49.206344 containerd[1501]: time="2025-11-08T00:26:49.206351347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:49.653402 sshd[4324]: Accepted publickey for core from 147.75.109.163 port 44424 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:49.655251 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:49.661857 systemd-logind[1479]: New session 22 of user core. Nov 8 00:26:49.668420 systemd[1]: Started session-22.scope - Session 22 of User core. 
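The Ready=False node condition above is set because the container runtime still reports NetworkReady=false: the new cilium-agent has not yet written a CNI configuration for this node. A quick way to see when that changes is to look at the CNI conf directory; /etc/cni/net.d is the conventional default and is assumed here rather than read from the log:

package main

import (
	"fmt"
	"os"
)

// The kubelet keeps reporting NetworkPluginNotReady until a CNI plugin
// drops its config into the CNI conf directory.
func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("no CNI config yet:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("CNI conf dir is empty; node stays NotReady")
		return
	}
	for _, e := range entries {
		fmt.Println("found CNI config:", e.Name())
	}
}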
Nov 8 00:26:50.111055 containerd[1501]: time="2025-11-08T00:26:50.109537555Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 8 00:26:50.139481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300406296.mount: Deactivated successfully. Nov 8 00:26:50.144524 containerd[1501]: time="2025-11-08T00:26:50.144454199Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877\"" Nov 8 00:26:50.146586 containerd[1501]: time="2025-11-08T00:26:50.146544405Z" level=info msg="StartContainer for \"8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877\"" Nov 8 00:26:50.183390 systemd[1]: Started cri-containerd-8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877.scope - libcontainer container 8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877. Nov 8 00:26:50.215650 containerd[1501]: time="2025-11-08T00:26:50.215233559Z" level=info msg="StartContainer for \"8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877\" returns successfully" Nov 8 00:26:50.220473 systemd[1]: cri-containerd-8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877.scope: Deactivated successfully. Nov 8 00:26:50.246717 containerd[1501]: time="2025-11-08T00:26:50.246644891Z" level=info msg="shim disconnected" id=8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877 namespace=k8s.io Nov 8 00:26:50.246717 containerd[1501]: time="2025-11-08T00:26:50.246701849Z" level=warning msg="cleaning up after shim disconnected" id=8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877 namespace=k8s.io Nov 8 00:26:50.246717 containerd[1501]: time="2025-11-08T00:26:50.246710025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:50.366052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a6d0b60469486da62c514badc9a5770bbd1551b53391e6aedcb3ec34eccd877-rootfs.mount: Deactivated successfully. Nov 8 00:26:50.412719 sshd[4324]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:50.415493 systemd[1]: sshd@21-65.109.8.72:22-147.75.109.163:44424.service: Deactivated successfully. Nov 8 00:26:50.416979 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:26:50.418535 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:26:50.420331 systemd-logind[1479]: Removed session 22. Nov 8 00:26:50.565917 systemd[1]: Started sshd@22-65.109.8.72:22-147.75.109.163:34120.service - OpenSSH per-connection server daemon (147.75.109.163:34120). 
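The mount-bpf-fs step that just ran is the init container responsible for ensuring a BPF filesystem is mounted (conventionally at /sys/fs/bpf) before the agent starts. A small read-only check against /proc/self/mounts, meant to be run on the node itself; the mountpoint is the usual default and an assumption here:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Each line: device mountpoint fstype options dump pass
		fields := strings.Fields(s.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			fmt.Println("bpffs is mounted:", s.Text())
			return
		}
	}
	fmt.Println("bpffs not mounted at /sys/fs/bpf")
}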
Nov 8 00:26:50.810699 kubelet[2551]: E1108 00:26:50.810528 2551 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 8 00:26:51.111675 containerd[1501]: time="2025-11-08T00:26:51.111610637Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 8 00:26:51.124852 containerd[1501]: time="2025-11-08T00:26:51.124811464Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78\"" Nov 8 00:26:51.130651 containerd[1501]: time="2025-11-08T00:26:51.127616446Z" level=info msg="StartContainer for \"ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78\"" Nov 8 00:26:51.171447 systemd[1]: Started cri-containerd-ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78.scope - libcontainer container ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78. Nov 8 00:26:51.198854 systemd[1]: cri-containerd-ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78.scope: Deactivated successfully. Nov 8 00:26:51.202866 containerd[1501]: time="2025-11-08T00:26:51.202815631Z" level=info msg="StartContainer for \"ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78\" returns successfully" Nov 8 00:26:51.232005 containerd[1501]: time="2025-11-08T00:26:51.231846246Z" level=info msg="shim disconnected" id=ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78 namespace=k8s.io Nov 8 00:26:51.232005 containerd[1501]: time="2025-11-08T00:26:51.231911289Z" level=warning msg="cleaning up after shim disconnected" id=ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78 namespace=k8s.io Nov 8 00:26:51.232005 containerd[1501]: time="2025-11-08T00:26:51.231923272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:51.366620 systemd[1]: run-containerd-runc-k8s.io-ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78-runc.cL8jA0.mount: Deactivated successfully. Nov 8 00:26:51.366763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7bb61b4540edaef26b8348f65c71a7d4cccbb8c5e11ee77db511194fc46c78-rootfs.mount: Deactivated successfully. Nov 8 00:26:51.580150 sshd[4514]: Accepted publickey for core from 147.75.109.163 port 34120 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:26:51.581792 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:51.587317 systemd-logind[1479]: New session 23 of user core. Nov 8 00:26:51.595422 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 8 00:26:52.121672 containerd[1501]: time="2025-11-08T00:26:52.121622241Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:26:52.145821 containerd[1501]: time="2025-11-08T00:26:52.144507482Z" level=info msg="CreateContainer within sandbox \"ee4962b6f174b5b6cc0aaf7c3d2517a0a27e6582464984dfb0b201961b8c7709\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d85a099ccb23a734e025994201dddd84e315f41fef64147f4b1631767141dd56\"" Nov 8 00:26:52.144567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363829552.mount: Deactivated successfully. Nov 8 00:26:52.148866 containerd[1501]: time="2025-11-08T00:26:52.148401744Z" level=info msg="StartContainer for \"d85a099ccb23a734e025994201dddd84e315f41fef64147f4b1631767141dd56\"" Nov 8 00:26:52.189402 systemd[1]: Started cri-containerd-d85a099ccb23a734e025994201dddd84e315f41fef64147f4b1631767141dd56.scope - libcontainer container d85a099ccb23a734e025994201dddd84e315f41fef64147f4b1631767141dd56. Nov 8 00:26:52.242305 containerd[1501]: time="2025-11-08T00:26:52.240572758Z" level=info msg="StartContainer for \"d85a099ccb23a734e025994201dddd84e315f41fef64147f4b1631767141dd56\" returns successfully" Nov 8 00:26:52.658251 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 8 00:26:53.141718 kubelet[2551]: I1108 00:26:53.141331 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zfjlb" podStartSLOduration=5.141307857 podStartE2EDuration="5.141307857s" podCreationTimestamp="2025-11-08 00:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:53.140781045 +0000 UTC m=+147.569449640" watchObservedRunningTime="2025-11-08 00:26:53.141307857 +0000 UTC m=+147.569976421" Nov 8 00:26:55.466337 systemd-networkd[1414]: lxc_health: Link UP Nov 8 00:26:55.469535 systemd-networkd[1414]: lxc_health: Gained carrier Nov 8 00:26:56.701402 systemd-networkd[1414]: lxc_health: Gained IPv6LL Nov 8 00:27:03.390764 sshd[4514]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:03.398203 systemd[1]: sshd@22-65.109.8.72:22-147.75.109.163:34120.service: Deactivated successfully. Nov 8 00:27:03.398314 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:27:03.401618 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:27:03.403373 systemd-logind[1479]: Removed session 23. Nov 8 00:27:19.331919 systemd[1]: cri-containerd-4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272.scope: Deactivated successfully. Nov 8 00:27:19.332696 systemd[1]: cri-containerd-4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272.scope: Consumed 3.227s CPU time, 27.1M memory peak, 0B memory swap peak. Nov 8 00:27:19.360181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272-rootfs.mount: Deactivated successfully. 
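The podStartSLOduration / podStartE2EDuration of 5.141307857s reported above is simply the watch-observed running time minus podCreationTimestamp, since no image pull was recorded (both pull timestamps are the zero time). Reproducing the arithmetic from the values printed in the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	// Values copied from the pod_startup_latency_tracker entry above
	// (monotonic "m=+..." suffixes dropped); Go accepts the fractional
	// seconds on parse even though the layout omits them.
	created, err := time.Parse(layout, "2025-11-08 00:26:48 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-11-08 00:26:53.141307857 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 5.141307857s
}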
Nov 8 00:27:19.367355 containerd[1501]: time="2025-11-08T00:27:19.367277349Z" level=info msg="shim disconnected" id=4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272 namespace=k8s.io Nov 8 00:27:19.367838 containerd[1501]: time="2025-11-08T00:27:19.367357981Z" level=warning msg="cleaning up after shim disconnected" id=4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272 namespace=k8s.io Nov 8 00:27:19.367838 containerd[1501]: time="2025-11-08T00:27:19.367373280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:27:19.559584 kubelet[2551]: E1108 00:27:19.559418 2551 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57672->10.0.0.2:2379: read: connection timed out" Nov 8 00:27:20.181342 kubelet[2551]: I1108 00:27:20.181124 2551 scope.go:117] "RemoveContainer" containerID="4f1033f978937c1540d8269d0baf5397dc52ea2eda76320638e7b13378522272" Nov 8 00:27:20.186150 containerd[1501]: time="2025-11-08T00:27:20.186081355Z" level=info msg="CreateContainer within sandbox \"14726f56e711bce1a6e6944e47284e4ff15549a4768b3d62e52348e1617ec2d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:27:20.205509 containerd[1501]: time="2025-11-08T00:27:20.205445759Z" level=info msg="CreateContainer within sandbox \"14726f56e711bce1a6e6944e47284e4ff15549a4768b3d62e52348e1617ec2d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c83ea0015c27d9ad7aeffd4da6ef527468bc66d3fb2e826aa906c7ca72d9ba79\"" Nov 8 00:27:20.206205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611715306.mount: Deactivated successfully. Nov 8 00:27:20.206435 containerd[1501]: time="2025-11-08T00:27:20.206191872Z" level=info msg="StartContainer for \"c83ea0015c27d9ad7aeffd4da6ef527468bc66d3fb2e826aa906c7ca72d9ba79\"" Nov 8 00:27:20.242385 systemd[1]: Started cri-containerd-c83ea0015c27d9ad7aeffd4da6ef527468bc66d3fb2e826aa906c7ca72d9ba79.scope - libcontainer container c83ea0015c27d9ad7aeffd4da6ef527468bc66d3fb2e826aa906c7ca72d9ba79. 
Nov 8 00:27:20.285045 containerd[1501]: time="2025-11-08T00:27:20.284996055Z" level=info msg="StartContainer for \"c83ea0015c27d9ad7aeffd4da6ef527468bc66d3fb2e826aa906c7ca72d9ba79\" returns successfully" Nov 8 00:27:24.035241 kubelet[2551]: E1108 00:27:24.034993 2551 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57506->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-d839b30383.1875e0779f1bb156 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-d839b30383,UID:d7d2f13a53d4d57e87a177897fe88bf8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-d839b30383,},FirstTimestamp:2025-11-08 00:27:13.560539478 +0000 UTC m=+167.989208042,LastTimestamp:2025-11-08 00:27:13.560539478 +0000 UTC m=+167.989208042,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-d839b30383,}" Nov 8 00:27:24.666546 systemd[1]: cri-containerd-46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9.scope: Deactivated successfully. Nov 8 00:27:24.666794 systemd[1]: cri-containerd-46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9.scope: Consumed 1.837s CPU time, 21.0M memory peak, 0B memory swap peak. Nov 8 00:27:24.687593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9-rootfs.mount: Deactivated successfully. Nov 8 00:27:24.698426 containerd[1501]: time="2025-11-08T00:27:24.698359902Z" level=info msg="shim disconnected" id=46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9 namespace=k8s.io Nov 8 00:27:24.698800 containerd[1501]: time="2025-11-08T00:27:24.698417501Z" level=warning msg="cleaning up after shim disconnected" id=46620bb0e01b70237626ef636f6367e5d3bc8e5f65203bcaaa803eb31cc387e9 namespace=k8s.io Nov 8 00:27:24.698800 containerd[1501]: time="2025-11-08T00:27:24.698445253Z" level=info msg="cleaning up dead shim" namespace=k8s.io
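The section closes the way it handled kube-controller-manager: another long-running control-plane container is stopped and its shim cleaned up while reads against etcd (10.0.0.2:2379) time out, and the Attempt:1 passed to CreateContainer above is what later shows up as that container's restart count. A hedged client-go sketch that lists restart counts for kube-system pods on this node; the kubeconfig path and the field selector are assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name is taken from the log above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=ci-4081-3-6-n-d839b30383",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%-50s %-28s restarts=%d\n", p.Name, st.Name, st.RestartCount)
		}
	}
}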