Nov 6 23:44:25.050061 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 22:02:38 -00 2025 Nov 6 23:44:25.050114 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:44:25.050142 kernel: BIOS-provided physical RAM map: Nov 6 23:44:25.050160 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 23:44:25.050176 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 23:44:25.050190 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 23:44:25.050202 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Nov 6 23:44:25.050211 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Nov 6 23:44:25.050224 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 6 23:44:25.050240 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 6 23:44:25.050257 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 23:44:25.050274 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 23:44:25.050289 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 23:44:25.050301 kernel: NX (Execute Disable) protection: active Nov 6 23:44:25.050316 kernel: APIC: Static calls initialized Nov 6 23:44:25.050326 kernel: SMBIOS 3.0.0 present. 
Nov 6 23:44:25.050340 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Nov 6 23:44:25.050358 kernel: Hypervisor detected: KVM Nov 6 23:44:25.050376 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 23:44:25.050393 kernel: kvm-clock: using sched offset of 3129721716 cycles Nov 6 23:44:25.050406 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 23:44:25.050419 kernel: tsc: Detected 2399.996 MHz processor Nov 6 23:44:25.050434 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 23:44:25.050451 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 23:44:25.050473 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Nov 6 23:44:25.054218 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 23:44:25.054252 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 23:44:25.054268 kernel: Using GB pages for direct mapping Nov 6 23:44:25.054284 kernel: ACPI: Early table checksum verification disabled Nov 6 23:44:25.054294 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Nov 6 23:44:25.054303 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:44:25.054312 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:44:25.054321 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:44:25.054334 kernel: ACPI: FACS 0x000000007CFE0000 000040 Nov 6 23:44:25.054343 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:44:25.054351 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:44:25.054360 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:44:25.054369 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:44:25.054377 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Nov 6 23:44:25.054386 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Nov 6 23:44:25.054401 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Nov 6 23:44:25.054411 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Nov 6 23:44:25.054420 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Nov 6 23:44:25.054429 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Nov 6 23:44:25.054443 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Nov 6 23:44:25.054458 kernel: No NUMA configuration found Nov 6 23:44:25.054473 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Nov 6 23:44:25.054519 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Nov 6 23:44:25.054532 kernel: Zone ranges: Nov 6 23:44:25.054542 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 23:44:25.054551 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Nov 6 23:44:25.054559 kernel: Normal empty Nov 6 23:44:25.054568 kernel: Movable zone start for each node Nov 6 23:44:25.054577 kernel: Early memory node ranges Nov 6 23:44:25.054586 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 23:44:25.054595 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Nov 6 23:44:25.054606 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Nov 6 23:44:25.054615 kernel: On node 
0, zone DMA: 1 pages in unavailable ranges Nov 6 23:44:25.054624 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 23:44:25.054633 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 6 23:44:25.054642 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 23:44:25.054651 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 23:44:25.054660 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 23:44:25.054669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 23:44:25.054679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 23:44:25.054695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 23:44:25.054713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 23:44:25.054726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 23:44:25.054740 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 23:44:25.054754 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 23:44:25.054763 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 6 23:44:25.054772 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 23:44:25.054782 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 6 23:44:25.054791 kernel: Booting paravirtualized kernel on KVM Nov 6 23:44:25.054800 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 23:44:25.054812 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 23:44:25.054855 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 6 23:44:25.054870 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 6 23:44:25.054880 kernel: pcpu-alloc: [0] 0 1 Nov 6 23:44:25.054889 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 6 23:44:25.054899 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:44:25.054909 kernel: random: crng init done Nov 6 23:44:25.054918 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 23:44:25.054930 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 23:44:25.054939 kernel: Fallback order for Node 0: 0 Nov 6 23:44:25.054948 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Nov 6 23:44:25.054956 kernel: Policy zone: DMA32 Nov 6 23:44:25.054965 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 23:44:25.054975 kernel: Memory: 1920000K/2047464K available (14336K kernel code, 2288K rwdata, 22872K rodata, 43520K init, 1560K bss, 127204K reserved, 0K cma-reserved) Nov 6 23:44:25.054984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 23:44:25.054993 kernel: ftrace: allocating 37954 entries in 149 pages Nov 6 23:44:25.055002 kernel: ftrace: allocated 149 pages with 4 groups Nov 6 23:44:25.055013 kernel: Dynamic Preempt: voluntary Nov 6 23:44:25.055023 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 23:44:25.055033 kernel: rcu: RCU event tracing is enabled. 
Nov 6 23:44:25.055043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 23:44:25.055052 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 23:44:25.055061 kernel: Rude variant of Tasks RCU enabled. Nov 6 23:44:25.055070 kernel: Tracing variant of Tasks RCU enabled. Nov 6 23:44:25.055080 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 23:44:25.055089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 23:44:25.055100 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 6 23:44:25.055110 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 23:44:25.055118 kernel: Console: colour VGA+ 80x25 Nov 6 23:44:25.055127 kernel: printk: console [tty0] enabled Nov 6 23:44:25.055136 kernel: printk: console [ttyS0] enabled Nov 6 23:44:25.055149 kernel: ACPI: Core revision 20230628 Nov 6 23:44:25.055164 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 23:44:25.055179 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 23:44:25.055194 kernel: x2apic enabled Nov 6 23:44:25.055208 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 23:44:25.055226 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 23:44:25.055243 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 6 23:44:25.055258 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399996) Nov 6 23:44:25.055272 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 23:44:25.055286 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 6 23:44:25.055298 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 6 23:44:25.055308 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 23:44:25.055326 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 23:44:25.055336 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 23:44:25.055346 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 6 23:44:25.055355 kernel: active return thunk: retbleed_return_thunk Nov 6 23:44:25.055367 kernel: RETBleed: Mitigation: untrained return thunk Nov 6 23:44:25.055379 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 23:44:25.055389 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 23:44:25.055399 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 23:44:25.055408 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 23:44:25.055420 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 23:44:25.055429 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 23:44:25.055438 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 6 23:44:25.055448 kernel: Freeing SMP alternatives memory: 32K Nov 6 23:44:25.055457 kernel: pid_max: default: 32768 minimum: 301 Nov 6 23:44:25.055467 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 6 23:44:25.055476 kernel: landlock: Up and running. Nov 6 23:44:25.055485 kernel: SELinux: Initializing. 
Nov 6 23:44:25.055511 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 23:44:25.055523 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 23:44:25.055533 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 6 23:44:25.055542 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:44:25.055552 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:44:25.055561 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:44:25.055571 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 6 23:44:25.055580 kernel: ... version: 0 Nov 6 23:44:25.055589 kernel: ... bit width: 48 Nov 6 23:44:25.055599 kernel: ... generic registers: 6 Nov 6 23:44:25.055610 kernel: ... value mask: 0000ffffffffffff Nov 6 23:44:25.055620 kernel: ... max period: 00007fffffffffff Nov 6 23:44:25.055629 kernel: ... fixed-purpose events: 0 Nov 6 23:44:25.055638 kernel: ... event mask: 000000000000003f Nov 6 23:44:25.055648 kernel: signal: max sigframe size: 1776 Nov 6 23:44:25.055657 kernel: rcu: Hierarchical SRCU implementation. Nov 6 23:44:25.055667 kernel: rcu: Max phase no-delay instances is 400. Nov 6 23:44:25.055676 kernel: smp: Bringing up secondary CPUs ... Nov 6 23:44:25.055686 kernel: smpboot: x86: Booting SMP configuration: Nov 6 23:44:25.055697 kernel: .... node #0, CPUs: #1 Nov 6 23:44:25.055706 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 23:44:25.055715 kernel: smpboot: Max logical packages: 1 Nov 6 23:44:25.055725 kernel: smpboot: Total of 2 processors activated (9599.98 BogoMIPS) Nov 6 23:44:25.055734 kernel: devtmpfs: initialized Nov 6 23:44:25.055743 kernel: x86/mm: Memory block size: 128MB Nov 6 23:44:25.055753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 23:44:25.055762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 23:44:25.055772 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 23:44:25.055783 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 23:44:25.055796 kernel: audit: initializing netlink subsys (disabled) Nov 6 23:44:25.055811 kernel: audit: type=2000 audit(1762472663.481:1): state=initialized audit_enabled=0 res=1 Nov 6 23:44:25.055837 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 23:44:25.055853 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 23:44:25.055868 kernel: cpuidle: using governor menu Nov 6 23:44:25.055882 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 23:44:25.055892 kernel: dca service started, version 1.12.1 Nov 6 23:44:25.055903 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 6 23:44:25.055915 kernel: PCI: Using configuration type 1 for base access Nov 6 23:44:25.055924 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 23:44:25.055934 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 23:44:25.055943 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 23:44:25.055952 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 23:44:25.055962 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 23:44:25.055971 kernel: ACPI: Added _OSI(Module Device) Nov 6 23:44:25.055981 kernel: ACPI: Added _OSI(Processor Device) Nov 6 23:44:25.055990 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 23:44:25.056003 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 23:44:25.056017 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 6 23:44:25.056027 kernel: ACPI: Interpreter enabled Nov 6 23:44:25.056036 kernel: ACPI: PM: (supports S0 S5) Nov 6 23:44:25.056046 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 23:44:25.056055 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 23:44:25.056065 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 23:44:25.056074 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 23:44:25.056083 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 23:44:25.056287 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 23:44:25.056686 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 6 23:44:25.056788 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 6 23:44:25.056801 kernel: PCI host bridge to bus 0000:00 Nov 6 23:44:25.056922 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 23:44:25.057004 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 23:44:25.057095 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 23:44:25.057195 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Nov 6 23:44:25.057324 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 6 23:44:25.057408 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 6 23:44:25.057487 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 23:44:25.057628 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 6 23:44:25.057733 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Nov 6 23:44:25.057948 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Nov 6 23:44:25.058067 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Nov 6 23:44:25.058156 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Nov 6 23:44:25.058244 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Nov 6 23:44:25.058332 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 23:44:25.060604 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.060664 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Nov 6 23:44:25.060718 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.060765 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Nov 6 23:44:25.060816 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.060871 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Nov 6 23:44:25.060922 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Nov 6 
23:44:25.060968 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Nov 6 23:44:25.061021 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.061065 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Nov 6 23:44:25.061114 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.061158 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Nov 6 23:44:25.061210 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.061260 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Nov 6 23:44:25.061311 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.061355 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Nov 6 23:44:25.061405 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Nov 6 23:44:25.061450 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Nov 6 23:44:25.061516 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 6 23:44:25.061564 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 23:44:25.061616 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 6 23:44:25.061661 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Nov 6 23:44:25.061705 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Nov 6 23:44:25.061755 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 6 23:44:25.061800 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 6 23:44:25.061864 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Nov 6 23:44:25.061915 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Nov 6 23:44:25.061961 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Nov 6 23:44:25.062008 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Nov 6 23:44:25.062053 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 6 23:44:25.062098 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Nov 6 23:44:25.062143 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 6 23:44:25.062203 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Nov 6 23:44:25.062252 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Nov 6 23:44:25.062299 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 6 23:44:25.062343 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Nov 6 23:44:25.062388 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 23:44:25.062439 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Nov 6 23:44:25.062499 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Nov 6 23:44:25.062548 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Nov 6 23:44:25.062595 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 6 23:44:25.062639 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Nov 6 23:44:25.062683 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 23:44:25.062735 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Nov 6 23:44:25.062783 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Nov 6 23:44:25.062839 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 6 23:44:25.062886 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Nov 6 23:44:25.062933 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 
23:44:25.062985 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Nov 6 23:44:25.063033 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Nov 6 23:44:25.063083 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Nov 6 23:44:25.063130 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 6 23:44:25.063174 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Nov 6 23:44:25.063219 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 23:44:25.063271 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Nov 6 23:44:25.063320 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Nov 6 23:44:25.063366 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Nov 6 23:44:25.063419 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 6 23:44:25.063466 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Nov 6 23:44:25.063595 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 23:44:25.063603 kernel: acpiphp: Slot [0] registered Nov 6 23:44:25.063659 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Nov 6 23:44:25.063710 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Nov 6 23:44:25.063755 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Nov 6 23:44:25.063801 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Nov 6 23:44:25.063855 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 6 23:44:25.063900 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Nov 6 23:44:25.063945 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 23:44:25.063951 kernel: acpiphp: Slot [0-2] registered Nov 6 23:44:25.063995 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 6 23:44:25.064042 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Nov 6 23:44:25.064086 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 23:44:25.064093 kernel: acpiphp: Slot [0-3] registered Nov 6 23:44:25.064137 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 6 23:44:25.064181 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 6 23:44:25.064225 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 23:44:25.064231 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 23:44:25.064236 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 23:44:25.064242 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 23:44:25.064247 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 23:44:25.064252 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 23:44:25.064257 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 23:44:25.064261 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 23:44:25.064266 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 23:44:25.064271 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 23:44:25.064279 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 23:44:25.064287 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 23:44:25.064294 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 23:44:25.064299 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 23:44:25.064304 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 
23:44:25.064309 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 6 23:44:25.064314 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 23:44:25.064319 kernel: iommu: Default domain type: Translated Nov 6 23:44:25.064323 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 23:44:25.064328 kernel: PCI: Using ACPI for IRQ routing Nov 6 23:44:25.064333 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 23:44:25.064339 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 23:44:25.064344 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Nov 6 23:44:25.064394 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 23:44:25.064439 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 23:44:25.064485 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 23:44:25.064504 kernel: vgaarb: loaded Nov 6 23:44:25.064509 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 23:44:25.064514 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 23:44:25.064519 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 23:44:25.064526 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 23:44:25.064531 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 23:44:25.064536 kernel: pnp: PnP ACPI init Nov 6 23:44:25.064593 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 6 23:44:25.064600 kernel: pnp: PnP ACPI: found 5 devices Nov 6 23:44:25.064605 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 23:44:25.064610 kernel: NET: Registered PF_INET protocol family Nov 6 23:44:25.064615 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 23:44:25.064621 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 23:44:25.064627 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 23:44:25.064631 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 23:44:25.064636 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 23:44:25.064641 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 23:44:25.064646 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 23:44:25.064651 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 23:44:25.064658 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 23:44:25.064666 kernel: NET: Registered PF_XDP protocol family Nov 6 23:44:25.064730 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 6 23:44:25.064777 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 6 23:44:25.064831 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 6 23:44:25.064876 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Nov 6 23:44:25.064922 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Nov 6 23:44:25.064994 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Nov 6 23:44:25.065043 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 6 23:44:25.065092 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Nov 6 23:44:25.065138 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 6 23:44:25.065182 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] 
Nov 6 23:44:25.065227 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Nov 6 23:44:25.065271 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 23:44:25.065316 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 6 23:44:25.065360 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Nov 6 23:44:25.065405 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 23:44:25.065454 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 6 23:44:25.065517 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Nov 6 23:44:25.065572 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 23:44:25.065619 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 6 23:44:25.065667 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Nov 6 23:44:25.065757 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 23:44:25.065815 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 6 23:44:25.065882 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Nov 6 23:44:25.065929 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 23:44:25.065976 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 6 23:44:25.066021 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Nov 6 23:44:25.066066 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Nov 6 23:44:25.066110 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 23:44:25.066155 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 6 23:44:25.066199 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Nov 6 23:44:25.066244 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Nov 6 23:44:25.066295 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 23:44:25.066356 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 6 23:44:25.066405 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Nov 6 23:44:25.066464 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 6 23:44:25.066681 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 23:44:25.066732 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 23:44:25.066789 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 23:44:25.066857 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 23:44:25.066898 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Nov 6 23:44:25.066937 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 6 23:44:25.066975 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 6 23:44:25.067026 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 6 23:44:25.067070 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 6 23:44:25.067117 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 6 23:44:25.067158 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 23:44:25.067206 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 6 23:44:25.067251 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 23:44:25.067298 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 6 23:44:25.067338 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 
23:44:25.067383 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 6 23:44:25.067423 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 23:44:25.067472 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Nov 6 23:44:25.067526 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 23:44:25.067590 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Nov 6 23:44:25.067650 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 6 23:44:25.067707 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 23:44:25.067755 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Nov 6 23:44:25.067795 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Nov 6 23:44:25.067848 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 23:44:25.067893 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Nov 6 23:44:25.067936 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Nov 6 23:44:25.067976 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 23:44:25.067982 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 23:44:25.067988 kernel: PCI: CLS 0 bytes, default 64 Nov 6 23:44:25.067993 kernel: Initialise system trusted keyrings Nov 6 23:44:25.067998 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 23:44:25.068003 kernel: Key type asymmetric registered Nov 6 23:44:25.068008 kernel: Asymmetric key parser 'x509' registered Nov 6 23:44:25.068013 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 6 23:44:25.068020 kernel: io scheduler mq-deadline registered Nov 6 23:44:25.068025 kernel: io scheduler kyber registered Nov 6 23:44:25.068030 kernel: io scheduler bfq registered Nov 6 23:44:25.068078 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 6 23:44:25.068127 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 6 23:44:25.068172 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 6 23:44:25.068216 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 6 23:44:25.068260 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 6 23:44:25.068307 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 6 23:44:25.068351 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 6 23:44:25.068395 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 6 23:44:25.068440 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 6 23:44:25.068485 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 6 23:44:25.068588 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 6 23:44:25.068651 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 6 23:44:25.068706 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 6 23:44:25.068755 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 6 23:44:25.068799 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 6 23:44:25.068854 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 6 23:44:25.068861 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 23:44:25.068905 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Nov 6 23:44:25.068952 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Nov 6 23:44:25.068960 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 23:44:25.068966 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Nov 6 23:44:25.068971 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Nov 6 23:44:25.068978 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 23:44:25.068983 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 23:44:25.068990 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 23:44:25.068995 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 23:44:25.069000 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 23:44:25.069047 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 6 23:44:25.069089 kernel: rtc_cmos 00:03: registered as rtc0 Nov 6 23:44:25.069130 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T23:44:24 UTC (1762472664) Nov 6 23:44:25.069175 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 6 23:44:25.069181 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 6 23:44:25.069186 kernel: NET: Registered PF_INET6 protocol family Nov 6 23:44:25.069192 kernel: Segment Routing with IPv6 Nov 6 23:44:25.069197 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 23:44:25.069202 kernel: NET: Registered PF_PACKET protocol family Nov 6 23:44:25.069208 kernel: Key type dns_resolver registered Nov 6 23:44:25.069217 kernel: IPI shorthand broadcast: enabled Nov 6 23:44:25.069225 kernel: sched_clock: Marking stable (1143013003, 127936635)->(1307588934, -36639296) Nov 6 23:44:25.069233 kernel: registered taskstats version 1 Nov 6 23:44:25.069238 kernel: Loading compiled-in X.509 certificates Nov 6 23:44:25.069243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: d06f6bc77ef9183fbb55ec1fc021fe2cce974996' Nov 6 23:44:25.069248 kernel: Key type .fscrypt registered Nov 6 23:44:25.069253 kernel: Key type fscrypt-provisioning registered Nov 6 23:44:25.069258 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 23:44:25.069263 kernel: ima: Allocated hash algorithm: sha1 Nov 6 23:44:25.069267 kernel: ima: No architecture policies found Nov 6 23:44:25.069272 kernel: clk: Disabling unused clocks Nov 6 23:44:25.069279 kernel: Freeing unused kernel image (initmem) memory: 43520K Nov 6 23:44:25.069284 kernel: Write protecting the kernel read-only data: 38912k Nov 6 23:44:25.069289 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Nov 6 23:44:25.069294 kernel: Run /init as init process Nov 6 23:44:25.069299 kernel: with arguments: Nov 6 23:44:25.069305 kernel: /init Nov 6 23:44:25.069311 kernel: with environment: Nov 6 23:44:25.069316 kernel: HOME=/ Nov 6 23:44:25.069321 kernel: TERM=linux Nov 6 23:44:25.069329 systemd[1]: Successfully made /usr/ read-only. Nov 6 23:44:25.069337 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:44:25.069343 systemd[1]: Detected virtualization kvm. Nov 6 23:44:25.069348 systemd[1]: Detected architecture x86-64. Nov 6 23:44:25.069353 systemd[1]: Running in initrd. Nov 6 23:44:25.069358 systemd[1]: No hostname configured, using default hostname. Nov 6 23:44:25.069364 systemd[1]: Hostname set to . Nov 6 23:44:25.069370 systemd[1]: Initializing machine ID from VM UUID. Nov 6 23:44:25.069377 systemd[1]: Queued start job for default target initrd.target. 
Nov 6 23:44:25.069383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:44:25.069388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:44:25.069396 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 23:44:25.069401 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:44:25.069407 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 23:44:25.069414 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 23:44:25.069420 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 23:44:25.069428 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 23:44:25.069433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:44:25.069439 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:44:25.069444 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:44:25.069450 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:44:25.069455 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:44:25.069464 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:44:25.069470 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:44:25.069475 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:44:25.069481 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 23:44:25.069498 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 23:44:25.069504 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:44:25.069510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:44:25.069515 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:44:25.069520 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:44:25.069527 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 23:44:25.069533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:44:25.069539 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 23:44:25.069544 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 23:44:25.069550 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:44:25.069555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:44:25.069560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:44:25.069566 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 23:44:25.069573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:44:25.069579 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 23:44:25.069601 systemd-journald[187]: Collecting audit messages is disabled. Nov 6 23:44:25.069617 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:44:25.069623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 6 23:44:25.069628 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 23:44:25.069634 kernel: Bridge firewalling registered Nov 6 23:44:25.069640 systemd-journald[187]: Journal started Nov 6 23:44:25.069656 systemd-journald[187]: Runtime Journal (/run/log/journal/e369447eb59b4a7584604b6a02d7e5cd) is 4.8M, max 38.3M, 33.5M free. Nov 6 23:44:25.021582 systemd-modules-load[188]: Inserted module 'overlay' Nov 6 23:44:25.104726 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:44:25.058049 systemd-modules-load[188]: Inserted module 'br_netfilter' Nov 6 23:44:25.105250 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:44:25.106017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:44:25.110589 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:44:25.111574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:44:25.115607 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:44:25.119717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:44:25.121090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:44:25.123066 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:44:25.127519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:44:25.129583 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 23:44:25.130113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:44:25.133091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:44:25.138972 dracut-cmdline[221]: dracut-dracut-053 Nov 6 23:44:25.141516 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:44:25.155328 systemd-resolved[224]: Positive Trust Anchors: Nov 6 23:44:25.155883 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:44:25.155908 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:44:25.158164 systemd-resolved[224]: Defaulting to hostname 'linux'. Nov 6 23:44:25.158739 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:44:25.159179 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 6 23:44:25.183556 kernel: SCSI subsystem initialized Nov 6 23:44:25.190507 kernel: Loading iSCSI transport class v2.0-870. Nov 6 23:44:25.207543 kernel: iscsi: registered transport (tcp) Nov 6 23:44:25.221680 kernel: iscsi: registered transport (qla4xxx) Nov 6 23:44:25.221742 kernel: QLogic iSCSI HBA Driver Nov 6 23:44:25.250907 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 23:44:25.256633 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 23:44:25.288412 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 23:44:25.288547 kernel: device-mapper: uevent: version 1.0.3 Nov 6 23:44:25.288603 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 6 23:44:25.347580 kernel: raid6: avx2x4 gen() 20210 MB/s Nov 6 23:44:25.365541 kernel: raid6: avx2x2 gen() 26468 MB/s Nov 6 23:44:25.382605 kernel: raid6: avx2x1 gen() 30096 MB/s Nov 6 23:44:25.382663 kernel: raid6: using algorithm avx2x1 gen() 30096 MB/s Nov 6 23:44:25.401521 kernel: raid6: .... xor() 28792 MB/s, rmw enabled Nov 6 23:44:25.401571 kernel: raid6: using avx2x2 recovery algorithm Nov 6 23:44:25.417522 kernel: xor: automatically using best checksumming function avx Nov 6 23:44:25.510530 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 23:44:25.515703 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:44:25.526659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:44:25.538173 systemd-udevd[406]: Using default interface naming scheme 'v255'. Nov 6 23:44:25.540806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:44:25.547637 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 23:44:25.555146 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Nov 6 23:44:25.572540 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:44:25.576602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:44:25.609037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:44:25.616592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 23:44:25.624350 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 23:44:25.626467 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:44:25.627326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:44:25.628796 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:44:25.637583 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 23:44:25.643248 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:44:25.668519 kernel: scsi host0: Virtio SCSI HBA Nov 6 23:44:25.672604 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 23:44:25.679518 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 6 23:44:25.696715 kernel: AVX2 version of gcm_enc/dec engaged. Nov 6 23:44:25.696765 kernel: AES CTR mode by8 optimization enabled Nov 6 23:44:25.696687 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:44:25.696778 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 6 23:44:25.730232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:44:25.731387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:44:25.731661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:44:25.734935 kernel: ACPI: bus type USB registered Nov 6 23:44:25.735003 kernel: usbcore: registered new interface driver usbfs Nov 6 23:44:25.735011 kernel: usbcore: registered new interface driver hub Nov 6 23:44:25.735023 kernel: usbcore: registered new device driver usb Nov 6 23:44:25.733477 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:44:25.740853 kernel: libata version 3.00 loaded. Nov 6 23:44:25.744738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:44:25.764584 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 6 23:44:25.764764 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 6 23:44:25.764859 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 6 23:44:25.764919 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 6 23:44:25.764986 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 6 23:44:25.767525 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 23:44:25.767550 kernel: GPT:17805311 != 80003071 Nov 6 23:44:25.767558 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 23:44:25.767564 kernel: GPT:17805311 != 80003071 Nov 6 23:44:25.767569 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 23:44:25.767575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:44:25.767582 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 6 23:44:25.769524 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 23:44:25.769658 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 23:44:25.770530 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 6 23:44:25.770648 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 23:44:25.777504 kernel: scsi host1: ahci Nov 6 23:44:25.779963 kernel: scsi host2: ahci Nov 6 23:44:25.780094 kernel: scsi host3: ahci Nov 6 23:44:25.782505 kernel: scsi host4: ahci Nov 6 23:44:25.782618 kernel: scsi host5: ahci Nov 6 23:44:25.784643 kernel: scsi host6: ahci Nov 6 23:44:25.784747 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Nov 6 23:44:25.784809 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Nov 6 23:44:25.784816 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Nov 6 23:44:25.784822 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Nov 6 23:44:25.784828 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Nov 6 23:44:25.784849 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Nov 6 23:44:25.784855 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 6 23:44:25.785686 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 6 23:44:25.785770 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 6 23:44:25.785953 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 6 23:44:25.786069 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 6 23:44:25.786516 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 6 23:44:25.787137 
kernel: hub 1-0:1.0: USB hub found Nov 6 23:44:25.788680 kernel: hub 1-0:1.0: 4 ports detected Nov 6 23:44:25.788744 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 6 23:44:25.788809 kernel: hub 2-0:1.0: USB hub found Nov 6 23:44:25.788888 kernel: hub 2-0:1.0: 4 ports detected Nov 6 23:44:25.816516 kernel: BTRFS: device fsid 7e63b391-7474-48b8-9614-cf161680d90d devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (460) Nov 6 23:44:25.827128 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 6 23:44:25.871437 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (468) Nov 6 23:44:25.875259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:44:25.882471 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 6 23:44:25.887909 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 6 23:44:25.888582 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 6 23:44:25.918009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 6 23:44:25.926637 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 23:44:25.929271 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:44:25.936686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:44:25.936878 disk-uuid[570]: Primary Header is updated. Nov 6 23:44:25.936878 disk-uuid[570]: Secondary Entries is updated. Nov 6 23:44:25.936878 disk-uuid[570]: Secondary Header is updated. Nov 6 23:44:25.947768 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 6 23:44:26.026551 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 6 23:44:26.096416 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 23:44:26.096577 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 23:44:26.096608 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 23:44:26.099787 kernel: ata1.00: applying bridge limits Nov 6 23:44:26.104897 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 6 23:44:26.105549 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 23:44:26.110628 kernel: ata1.00: configured for UDMA/100 Nov 6 23:44:26.113988 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 23:44:26.115770 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 23:44:26.115826 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 23:44:26.165027 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 23:44:26.165093 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 6 23:44:26.167918 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 23:44:26.170985 kernel: usbcore: registered new interface driver usbhid Nov 6 23:44:26.172514 kernel: usbhid: USB HID core driver Nov 6 23:44:26.180461 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 6 23:44:26.180546 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 6 23:44:26.186531 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Nov 6 23:44:26.952540 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:44:26.954704 disk-uuid[575]: The operation has completed successfully. Nov 6 23:44:27.043835 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 23:44:27.044021 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 23:44:27.108915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 23:44:27.112554 sh[600]: Success Nov 6 23:44:27.135565 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 6 23:44:27.210573 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 23:44:27.222593 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 23:44:27.224771 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 23:44:27.252787 kernel: BTRFS info (device dm-0): first mount of filesystem 7e63b391-7474-48b8-9614-cf161680d90d Nov 6 23:44:27.252885 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:44:27.256644 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 6 23:44:27.259973 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 23:44:27.263909 kernel: BTRFS info (device dm-0): using free space tree Nov 6 23:44:27.274556 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 6 23:44:27.277701 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 23:44:27.279231 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 23:44:27.291733 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 23:44:27.294735 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 6 23:44:27.327147 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:44:27.327227 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:44:27.327246 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:44:27.336810 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 23:44:27.336895 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:44:27.347221 kernel: BTRFS info (device sda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:44:27.350288 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 23:44:27.357704 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 23:44:27.423037 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:44:27.433717 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:44:27.436386 ignition[703]: Ignition 2.20.0 Nov 6 23:44:27.436391 ignition[703]: Stage: fetch-offline Nov 6 23:44:27.436412 ignition[703]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:44:27.437769 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:44:27.436417 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 23:44:27.436502 ignition[703]: parsed url from cmdline: "" Nov 6 23:44:27.436978 ignition[703]: no config URL provided Nov 6 23:44:27.436983 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:44:27.436989 ignition[703]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:44:27.436993 ignition[703]: failed to fetch config: resource requires networking Nov 6 23:44:27.437107 ignition[703]: Ignition finished successfully Nov 6 23:44:27.454011 systemd-networkd[782]: lo: Link UP Nov 6 23:44:27.454019 systemd-networkd[782]: lo: Gained carrier Nov 6 23:44:27.455472 systemd-networkd[782]: Enumeration completed Nov 6 23:44:27.455635 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:44:27.456069 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:44:27.456071 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:44:27.456759 systemd[1]: Reached target network.target - Network. Nov 6 23:44:27.456985 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:44:27.456988 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:44:27.457398 systemd-networkd[782]: eth0: Link UP Nov 6 23:44:27.457400 systemd-networkd[782]: eth0: Gained carrier Nov 6 23:44:27.457405 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:44:27.461687 systemd-networkd[782]: eth1: Link UP Nov 6 23:44:27.461690 systemd-networkd[782]: eth1: Gained carrier Nov 6 23:44:27.461695 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:44:27.465942 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 6 23:44:27.474989 ignition[786]: Ignition 2.20.0 Nov 6 23:44:27.474998 ignition[786]: Stage: fetch Nov 6 23:44:27.475105 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:44:27.475196 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 23:44:27.475275 ignition[786]: parsed url from cmdline: "" Nov 6 23:44:27.475277 ignition[786]: no config URL provided Nov 6 23:44:27.475280 ignition[786]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:44:27.475285 ignition[786]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:44:27.475303 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 6 23:44:27.475423 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 6 23:44:27.488602 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 6 23:44:27.514558 systemd-networkd[782]: eth0: DHCPv4 address 46.62.225.38/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 6 23:44:27.675714 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Nov 6 23:44:27.686712 ignition[786]: GET result: OK Nov 6 23:44:27.686842 ignition[786]: parsing config with SHA512: 3971ad6544233537d30dfc29deb3cf30c4698a109488094731414155fec373faff6f31c00435b52fc2d09b6b35e80c2aed3defc98447e895f3fc66dcf09e005d Nov 6 23:44:27.698721 unknown[786]: fetched base config from "system" Nov 6 23:44:27.698744 unknown[786]: fetched base config from "system" Nov 6 23:44:27.699399 ignition[786]: fetch: fetch complete Nov 6 23:44:27.698753 unknown[786]: fetched user config from "hetzner" Nov 6 23:44:27.699409 ignition[786]: fetch: fetch passed Nov 6 23:44:27.703063 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 23:44:27.699485 ignition[786]: Ignition finished successfully Nov 6 23:44:27.709888 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 23:44:27.733119 ignition[794]: Ignition 2.20.0 Nov 6 23:44:27.733137 ignition[794]: Stage: kargs Nov 6 23:44:27.733390 ignition[794]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:44:27.737381 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 23:44:27.733422 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 23:44:27.734920 ignition[794]: kargs: kargs passed Nov 6 23:44:27.734986 ignition[794]: Ignition finished successfully Nov 6 23:44:27.746872 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 23:44:27.766149 ignition[800]: Ignition 2.20.0 Nov 6 23:44:27.766168 ignition[800]: Stage: disks Nov 6 23:44:27.768039 ignition[800]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:44:27.768066 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 23:44:27.772906 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 23:44:27.771170 ignition[800]: disks: disks passed Nov 6 23:44:27.784112 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 23:44:27.771282 ignition[800]: Ignition finished successfully Nov 6 23:44:27.786199 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 23:44:27.788775 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:44:27.790919 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:44:27.793640 systemd[1]: Reached target basic.target - Basic System. 
Nov 6 23:44:27.804666 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 23:44:27.823219 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 6 23:44:27.826273 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 23:44:27.835656 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 23:44:27.929558 kernel: EXT4-fs (sda9): mounted filesystem 2abcf372-764b-46c0-a870-42c779c5f871 r/w with ordered data mode. Quota mode: none. Nov 6 23:44:27.931155 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 23:44:27.932956 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 23:44:27.943627 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:44:27.946537 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 23:44:27.951600 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 23:44:27.952817 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 23:44:27.953630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:44:27.967247 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (816) Nov 6 23:44:27.967307 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:44:27.967337 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:44:27.967361 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:44:27.954868 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 23:44:27.968654 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 23:44:27.972682 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 23:44:27.972730 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:44:27.978382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:44:28.027259 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 23:44:28.032157 coreos-metadata[818]: Nov 06 23:44:28.031 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 6 23:44:28.034816 coreos-metadata[818]: Nov 06 23:44:28.032 INFO Fetch successful Nov 6 23:44:28.034816 coreos-metadata[818]: Nov 06 23:44:28.032 INFO wrote hostname ci-4230-2-4-n-9b34d37d4f to /sysroot/etc/hostname Nov 6 23:44:28.038353 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Nov 6 23:44:28.037537 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 23:44:28.040771 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 23:44:28.044471 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 23:44:28.104734 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 23:44:28.109549 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 23:44:28.113569 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Nov 6 23:44:28.117172 kernel: BTRFS info (device sda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:44:28.130422 ignition[932]: INFO : Ignition 2.20.0 Nov 6 23:44:28.130422 ignition[932]: INFO : Stage: mount Nov 6 23:44:28.132696 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:44:28.132696 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 23:44:28.135338 ignition[932]: INFO : mount: mount passed Nov 6 23:44:28.135338 ignition[932]: INFO : Ignition finished successfully Nov 6 23:44:28.134735 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 23:44:28.142650 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 23:44:28.143749 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 23:44:28.250716 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 23:44:28.267817 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:44:28.283919 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (945) Nov 6 23:44:28.283987 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:44:28.287582 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:44:28.291528 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:44:28.300434 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 23:44:28.300519 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:44:28.304377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:44:28.331785 ignition[962]: INFO : Ignition 2.20.0 Nov 6 23:44:28.331785 ignition[962]: INFO : Stage: files Nov 6 23:44:28.333925 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:44:28.333925 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 23:44:28.333925 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Nov 6 23:44:28.337920 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 23:44:28.337920 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 23:44:28.340796 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 23:44:28.340796 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 23:44:28.340796 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 23:44:28.340734 unknown[962]: wrote ssh authorized keys file for user: core Nov 6 23:44:28.347368 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 6 23:44:28.347368 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 6 23:44:28.626390 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 23:44:28.635771 systemd-networkd[782]: eth1: Gained IPv6LL Nov 6 23:44:28.939955 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 6 23:44:28.939955 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:44:28.944451 ignition[962]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 23:44:29.232819 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 23:44:29.276677 systemd-networkd[782]: eth0: Gained IPv6LL Nov 6 23:44:29.328519 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 23:44:29.330594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 6 23:44:29.639159 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 23:44:29.870807 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 6 23:44:29.870807 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 
23:44:29.874812 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 6 23:44:29.874812 ignition[962]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 23:44:29.874812 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:44:29.874812 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:44:29.874812 ignition[962]: INFO : files: files passed Nov 6 23:44:29.874812 ignition[962]: INFO : Ignition finished successfully Nov 6 23:44:29.874788 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 23:44:29.888655 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 23:44:29.893664 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 23:44:29.894466 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 23:44:29.895369 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 23:44:29.906156 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:44:29.906156 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:44:29.908850 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:44:29.910560 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:44:29.911655 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 23:44:29.917737 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 23:44:29.947129 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 23:44:29.947215 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 23:44:29.948463 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 23:44:29.949253 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 23:44:29.950292 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 23:44:29.951595 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 23:44:29.960850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:44:29.966681 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Nov 6 23:44:29.979952 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:44:29.981791 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:44:29.983548 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 23:44:29.984648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 23:44:29.984837 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:44:29.986845 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 23:44:29.987943 systemd[1]: Stopped target basic.target - Basic System. Nov 6 23:44:29.989228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 23:44:29.990704 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:44:29.992336 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 23:44:29.994667 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 23:44:29.997171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:44:29.999186 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 23:44:30.001228 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 23:44:30.003101 systemd[1]: Stopped target swap.target - Swaps. Nov 6 23:44:30.004690 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 23:44:30.004873 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:44:30.007107 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:44:30.008438 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:44:30.010361 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 23:44:30.010526 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:44:30.011929 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 23:44:30.012023 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 23:44:30.014001 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 23:44:30.014101 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:44:30.014825 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 23:44:30.014918 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 23:44:30.016432 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 23:44:30.016526 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 23:44:30.022698 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 23:44:30.029903 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 23:44:30.031518 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 23:44:30.032797 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 6 23:44:30.035733 ignition[1014]: INFO : Ignition 2.20.0 Nov 6 23:44:30.035733 ignition[1014]: INFO : Stage: umount Nov 6 23:44:30.035733 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:44:30.035733 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 23:44:30.035733 ignition[1014]: INFO : umount: umount passed Nov 6 23:44:30.035733 ignition[1014]: INFO : Ignition finished successfully Nov 6 23:44:30.034515 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 23:44:30.034629 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:44:30.044754 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 23:44:30.044877 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 23:44:30.049018 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 23:44:30.049138 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 23:44:30.054125 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 23:44:30.054198 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 23:44:30.055406 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 23:44:30.055465 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 23:44:30.056802 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 23:44:30.056854 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 23:44:30.057855 systemd[1]: Stopped target network.target - Network. Nov 6 23:44:30.059474 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 23:44:30.059598 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:44:30.062222 systemd[1]: Stopped target paths.target - Path Units. Nov 6 23:44:30.064547 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 23:44:30.068616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:44:30.070398 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 23:44:30.071266 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 23:44:30.072740 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 23:44:30.072788 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:44:30.074097 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 23:44:30.074137 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:44:30.075364 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 23:44:30.075424 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 23:44:30.076774 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 23:44:30.076842 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 23:44:30.078357 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 23:44:30.079651 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 23:44:30.082635 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 23:44:30.083440 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 23:44:30.083607 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 23:44:30.085420 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 23:44:30.085558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Nov 6 23:44:30.088272 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 23:44:30.088405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 23:44:30.093233 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 23:44:30.093487 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 23:44:30.093664 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 23:44:30.096353 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 23:44:30.097657 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 23:44:30.097716 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:44:30.109624 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 23:44:30.110632 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 23:44:30.110709 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:44:30.112144 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:44:30.112200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:44:30.114646 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 23:44:30.114705 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 23:44:30.116263 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 23:44:30.116318 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:44:30.118637 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:44:30.123435 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 23:44:30.123551 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:44:30.132551 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 23:44:30.132650 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 23:44:30.135386 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 23:44:30.135588 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:44:30.138132 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 23:44:30.138195 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 23:44:30.139996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 23:44:30.140060 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:44:30.141614 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 23:44:30.141651 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:44:30.144213 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 23:44:30.144245 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 23:44:30.145867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:44:30.145912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:44:30.154603 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 23:44:30.155343 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Nov 6 23:44:30.155384 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:44:30.155982 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 6 23:44:30.156016 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:44:30.156547 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 23:44:30.156576 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:44:30.157592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:44:30.157626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:44:30.160183 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 23:44:30.160225 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:44:30.161914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 23:44:30.162021 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 23:44:30.164265 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 23:44:30.173755 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 23:44:30.183144 systemd[1]: Switching root. Nov 6 23:44:30.254933 systemd-journald[187]: Journal stopped Nov 6 23:44:31.292396 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Nov 6 23:44:31.292452 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 23:44:31.292466 kernel: SELinux: policy capability open_perms=1 Nov 6 23:44:31.292476 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 23:44:31.292485 kernel: SELinux: policy capability always_check_network=0 Nov 6 23:44:31.292548 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 23:44:31.292558 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 23:44:31.292567 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 23:44:31.292576 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 23:44:31.292586 kernel: audit: type=1403 audit(1762472670.416:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 23:44:31.292599 systemd[1]: Successfully loaded SELinux policy in 62.939ms. Nov 6 23:44:31.292623 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.587ms. Nov 6 23:44:31.292634 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:44:31.292648 systemd[1]: Detected virtualization kvm. Nov 6 23:44:31.292658 systemd[1]: Detected architecture x86-64. Nov 6 23:44:31.292668 systemd[1]: Detected first boot. Nov 6 23:44:31.292679 systemd[1]: Hostname set to <ci-4230-2-4-n-9b34d37d4f>. Nov 6 23:44:31.292689 systemd[1]: Initializing machine ID from VM UUID. Nov 6 23:44:31.292700 zram_generator::config[1059]: No configuration found. 
Nov 6 23:44:31.292713 kernel: Guest personality initialized and is inactive Nov 6 23:44:31.292723 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 23:44:31.292732 kernel: Initialized host personality Nov 6 23:44:31.292742 kernel: NET: Registered PF_VSOCK protocol family Nov 6 23:44:31.292752 systemd[1]: Populated /etc with preset unit settings. Nov 6 23:44:31.292762 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 23:44:31.292778 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 23:44:31.292788 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 23:44:31.292799 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 23:44:31.292811 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 23:44:31.292821 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 23:44:31.292832 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 23:44:31.292841 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 23:44:31.292855 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 23:44:31.292865 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 23:44:31.292875 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 23:44:31.292886 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 23:44:31.292933 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:44:31.292945 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:44:31.292956 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 23:44:31.292966 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 23:44:31.292977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 23:44:31.292988 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:44:31.293000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 23:44:31.293010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:44:31.293021 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 23:44:31.293031 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 23:44:31.293043 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 23:44:31.293053 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 23:44:31.293064 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:44:31.293074 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:44:31.293084 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:44:31.293094 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:44:31.293108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 23:44:31.293120 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 23:44:31.293132 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Nov 6 23:44:31.293142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:44:31.293153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:44:31.293165 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:44:31.293175 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 23:44:31.293186 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 23:44:31.293196 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 23:44:31.293206 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 23:44:31.293217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:44:31.293227 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 23:44:31.293237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 23:44:31.293247 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 23:44:31.293260 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 23:44:31.293271 systemd[1]: Reached target machines.target - Containers. Nov 6 23:44:31.293282 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 23:44:31.293293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:44:31.293304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:44:31.293314 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 23:44:31.293324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:44:31.293334 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:44:31.293346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:44:31.293357 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 23:44:31.293367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:44:31.293377 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 23:44:31.293387 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 23:44:31.293398 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 23:44:31.293408 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 23:44:31.293418 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 23:44:31.293428 kernel: ACPI: bus type drm_connector registered Nov 6 23:44:31.293440 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:44:31.293450 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:44:31.293461 kernel: fuse: init (API version 7.39) Nov 6 23:44:31.293470 kernel: loop: module loaded Nov 6 23:44:31.293481 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 6 23:44:31.293503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 23:44:31.293528 systemd-journald[1150]: Collecting audit messages is disabled. Nov 6 23:44:31.293554 systemd-journald[1150]: Journal started Nov 6 23:44:31.293576 systemd-journald[1150]: Runtime Journal (/run/log/journal/e369447eb59b4a7584604b6a02d7e5cd) is 4.8M, max 38.3M, 33.5M free. Nov 6 23:44:30.945083 systemd[1]: Queued start job for default target multi-user.target. Nov 6 23:44:30.958883 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 6 23:44:30.959465 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 23:44:31.301637 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 23:44:31.307547 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 23:44:31.313681 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:44:31.313720 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 23:44:31.313730 systemd[1]: Stopped verity-setup.service. Nov 6 23:44:31.316464 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:44:31.319524 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:44:31.325768 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 23:44:31.326235 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 23:44:31.326692 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 23:44:31.327144 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 23:44:31.327609 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 23:44:31.328102 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 23:44:31.328643 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 23:44:31.329243 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:44:31.329871 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 23:44:31.329980 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 23:44:31.330701 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:44:31.330798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:44:31.331439 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:44:31.331726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:44:31.332365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:44:31.332454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:44:31.333057 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 23:44:31.333144 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 23:44:31.333681 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:44:31.333762 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:44:31.334763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:44:31.335341 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 6 23:44:31.336129 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 23:44:31.337077 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 23:44:31.343600 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:44:31.348560 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 23:44:31.351187 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 23:44:31.351683 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 23:44:31.351744 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:44:31.352841 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 23:44:31.356256 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 23:44:31.358130 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 23:44:31.358658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:44:31.361925 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 23:44:31.368616 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 23:44:31.370590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:44:31.371298 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 23:44:31.372732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:44:31.373585 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:44:31.376651 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 23:44:31.380615 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:44:31.382484 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 23:44:31.384609 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 23:44:31.385383 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 23:44:31.405655 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 23:44:31.409258 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 23:44:31.422567 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 23:44:31.430987 systemd-journald[1150]: Time spent on flushing to /var/log/journal/e369447eb59b4a7584604b6a02d7e5cd is 53.752ms for 1150 entries. Nov 6 23:44:31.430987 systemd-journald[1150]: System Journal (/var/log/journal/e369447eb59b4a7584604b6a02d7e5cd) is 8M, max 584.8M, 576.8M free. Nov 6 23:44:31.507909 systemd-journald[1150]: Received client request to flush runtime journal. 
Nov 6 23:44:31.507952 kernel: loop0: detected capacity change from 0 to 8 Nov 6 23:44:31.507966 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 23:44:31.507976 kernel: loop1: detected capacity change from 0 to 138176 Nov 6 23:44:31.456469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:44:31.477022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:44:31.484307 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Nov 6 23:44:31.484316 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Nov 6 23:44:31.493999 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:44:31.503676 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 23:44:31.506718 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 6 23:44:31.509668 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 23:44:31.513523 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 23:44:31.534938 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 6 23:44:31.547521 kernel: loop2: detected capacity change from 0 to 147912 Nov 6 23:44:31.553629 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 23:44:31.566209 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:44:31.589779 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Nov 6 23:44:31.590095 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Nov 6 23:44:31.594753 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:44:31.599567 kernel: loop3: detected capacity change from 0 to 224512 Nov 6 23:44:31.639575 kernel: loop4: detected capacity change from 0 to 8 Nov 6 23:44:31.642533 kernel: loop5: detected capacity change from 0 to 138176 Nov 6 23:44:31.655520 kernel: loop6: detected capacity change from 0 to 147912 Nov 6 23:44:31.674633 kernel: loop7: detected capacity change from 0 to 224512 Nov 6 23:44:31.691005 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 6 23:44:31.691761 (sd-merge)[1214]: Merged extensions into '/usr'. Nov 6 23:44:31.695271 systemd[1]: Reload requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 23:44:31.695590 systemd[1]: Reloading... Nov 6 23:44:31.771514 zram_generator::config[1242]: No configuration found. Nov 6 23:44:31.848523 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 23:44:31.862088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:44:31.906634 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 23:44:31.907016 systemd[1]: Reloading finished in 210 ms. Nov 6 23:44:31.921415 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 23:44:31.922172 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Nov 6 23:44:31.922888 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 23:44:31.936619 systemd[1]: Starting ensure-sysext.service... Nov 6 23:44:31.939601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:44:31.942398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:44:31.952542 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Nov 6 23:44:31.952566 systemd[1]: Reloading... Nov 6 23:44:31.954045 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 23:44:31.954177 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 23:44:31.955844 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 23:44:31.956055 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Nov 6 23:44:31.956580 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Nov 6 23:44:31.960286 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:44:31.960872 systemd-tmpfiles[1287]: Skipping /boot Nov 6 23:44:31.971151 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:44:31.971160 systemd-tmpfiles[1287]: Skipping /boot Nov 6 23:44:31.980690 systemd-udevd[1288]: Using default interface naming scheme 'v255'. Nov 6 23:44:32.006556 zram_generator::config[1316]: No configuration found. Nov 6 23:44:32.104512 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 23:44:32.109599 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 23:44:32.116512 kernel: ACPI: button: Power Button [PWRF] Nov 6 23:44:32.132088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:44:32.164513 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 23:44:32.164708 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 6 23:44:32.165775 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 23:44:32.178506 kernel: EDAC MC: Ver: 3.0.0 Nov 6 23:44:32.184512 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 6 23:44:32.193504 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1323) Nov 6 23:44:32.202241 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 23:44:32.202368 systemd[1]: Reloading finished in 249 ms. Nov 6 23:44:32.210734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:44:32.212118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 6 23:44:32.231603 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Nov 6 23:44:32.231666 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Nov 6 23:44:32.232777 kernel: Console: switching to colour dummy device 80x25 Nov 6 23:44:32.233621 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 6 23:44:32.233649 kernel: [drm] features: -context_init Nov 6 23:44:32.238538 kernel: [drm] number of scanouts: 1 Nov 6 23:44:32.238572 kernel: [drm] number of cap sets: 0 Nov 6 23:44:32.241560 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Nov 6 23:44:32.241607 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 6 23:44:32.241620 kernel: Console: switching to colour frame buffer device 160x50 Nov 6 23:44:32.250517 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 6 23:44:32.269665 systemd[1]: Finished ensure-sysext.service. Nov 6 23:44:32.287088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 6 23:44:32.288125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:44:32.292644 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:44:32.298628 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 23:44:32.298960 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:44:32.301404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:44:32.306176 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:44:32.307660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:44:32.309596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:44:32.312256 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:44:32.313871 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:44:32.313945 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:44:32.315675 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 23:44:32.320653 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:44:32.324200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:44:32.328604 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 23:44:32.330610 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 23:44:32.332466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:44:32.332535 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:44:32.333786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:44:32.333903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:44:32.334148 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 6 23:44:32.334299 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:44:32.334485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:44:32.334593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:44:32.337786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:44:32.344884 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 23:44:32.353467 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:44:32.353644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:44:32.354014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:44:32.362988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 23:44:32.368727 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 23:44:32.381454 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 23:44:32.390811 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:44:32.399653 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:44:32.402606 augenrules[1445]: No rules Nov 6 23:44:32.403430 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:44:32.403656 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:44:32.409885 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 6 23:44:32.413002 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:44:32.423674 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 6 23:44:32.436411 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:44:32.449882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:44:32.454348 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 23:44:32.455981 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 6 23:44:32.457135 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:44:32.466602 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 6 23:44:32.466989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:44:32.470799 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:44:32.490989 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 6 23:44:32.498941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 23:44:32.499399 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 23:44:32.499847 systemd-resolved[1412]: Positive Trust Anchors: Nov 6 23:44:32.500057 systemd-resolved[1412]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:44:32.500105 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:44:32.504217 systemd-resolved[1412]: Using system hostname 'ci-4230-2-4-n-9b34d37d4f'. Nov 6 23:44:32.505331 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:44:32.505754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:44:32.506095 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:44:32.506453 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:44:32.506772 systemd-networkd[1411]: lo: Link UP Nov 6 23:44:32.506775 systemd-networkd[1411]: lo: Gained carrier Nov 6 23:44:32.506776 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:44:32.507155 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:44:32.507484 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:44:32.508347 systemd-networkd[1411]: Enumeration completed Nov 6 23:44:32.508773 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:44:32.508781 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:44:32.512255 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:44:32.512304 systemd-networkd[1411]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:44:32.512307 systemd-networkd[1411]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:44:32.512647 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:44:32.512669 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:44:32.512961 systemd-networkd[1411]: eth0: Link UP Nov 6 23:44:32.512964 systemd-networkd[1411]: eth0: Gained carrier Nov 6 23:44:32.512975 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:44:32.513016 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:44:32.515021 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:44:32.516467 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 23:44:32.516948 systemd-networkd[1411]: eth1: Link UP Nov 6 23:44:32.516997 systemd-networkd[1411]: eth1: Gained carrier Nov 6 23:44:32.517014 systemd-networkd[1411]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
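The positive trust anchor systemd-resolved logs above is the DNS root zone's DS record for the 2017 root KSK (key tag 20326). A quick sketch splitting that record, exactly as logged, into its fields:

```python
# Sketch: decode the DS record used as systemd-resolved's positive trust anchor.
DS_LINE = (". IN DS 20326 8 2 "
           "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

def parse_ds(line: str) -> dict:
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = line.split()
    return {
        "owner": owner,                   # "." is the DNS root zone
        "key_tag": int(key_tag),          # 20326: the 2017 root KSK
        "algorithm": int(algorithm),      # 8 = RSA/SHA-256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest,
    }

print(parse_ds(DS_LINE))
```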
Nov 6 23:44:32.521426 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:44:32.523009 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:44:32.523460 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 23:44:32.532128 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 23:44:32.533733 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:44:32.534598 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:44:32.535033 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:44:32.535434 systemd[1]: Reached target network.target - Network. Nov 6 23:44:32.535764 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:44:32.536207 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:44:32.536524 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:44:32.536541 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:44:32.537430 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:44:32.541277 systemd-networkd[1411]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 6 23:44:32.542254 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 23:44:32.543669 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Nov 6 23:44:32.544847 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 23:44:32.547647 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 23:44:32.549605 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:44:32.551722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:44:32.553686 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 23:44:32.558279 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 23:44:32.561648 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 6 23:44:32.569078 coreos-metadata[1471]: Nov 06 23:44:32.567 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 6 23:44:32.569078 coreos-metadata[1471]: Nov 06 23:44:32.567 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata) Nov 6 23:44:32.570609 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 23:44:32.583626 jq[1473]: false Nov 6 23:44:32.572005 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 23:44:32.574716 systemd-networkd[1411]: eth0: DHCPv4 address 46.62.225.38/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 6 23:44:32.580950 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:44:32.582134 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Nov 6 23:44:32.587666 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
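coreos-metadata is retrying the Hetzner metadata service at the link-local address logged above. A minimal stdlib sketch of the same fetch, only meaningful when run on the Hetzner instance itself:

```python
# Sketch: fetch the same Hetzner metadata endpoint the Flatcar metadata agent
# retries above; failures here mirror the "Failed to fetch ... Attempt #1" line.
import urllib.request

METADATA_URL = "http://169.254.169.254/hetzner/v1/metadata"

def fetch_metadata(url: str = METADATA_URL, timeout: float = 2.0) -> str:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    try:
        print(fetch_metadata())
    except OSError as exc:
        print(f"metadata fetch failed: {exc}")
```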
Nov 6 23:44:32.590410 dbus-daemon[1472]: [system] SELinux support is enabled Nov 6 23:44:32.593617 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:44:32.594998 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 23:44:32.595373 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 23:44:32.597328 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 23:44:32.600581 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 23:44:32.601363 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 23:44:32.607045 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:44:32.607189 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:44:32.608689 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:44:32.608819 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:44:32.609048 jq[1495]: true Nov 6 23:44:32.616572 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:44:32.616599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 23:44:32.618745 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 23:44:32.618763 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:44:32.621235 jq[1499]: true Nov 6 23:44:32.628032 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:44:32.628164 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 23:44:32.637035 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:44:32.637326 update_engine[1492]: I20251106 23:44:32.635483 1492 main.cc:92] Flatcar Update Engine starting Nov 6 23:44:32.637388 systemd[1]: Started update-engine.service - Update Engine. Nov 6 23:44:32.641209 update_engine[1492]: I20251106 23:44:32.637432 1492 update_check_scheduler.cc:74] Next update check in 4m59s Nov 6 23:44:32.640733 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 23:44:32.642738 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Nov 6 23:44:32.648210 extend-filesystems[1474]: Found loop4 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found loop5 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found loop6 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found loop7 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda1 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda2 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda3 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found usr Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda4 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda6 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda7 Nov 6 23:44:32.650434 extend-filesystems[1474]: Found sda9 Nov 6 23:44:32.650434 extend-filesystems[1474]: Checking size of /dev/sda9 Nov 6 23:44:32.672553 extend-filesystems[1474]: Resized partition /dev/sda9 Nov 6 23:44:32.672829 tar[1498]: linux-amd64/LICENSE Nov 6 23:44:32.672829 tar[1498]: linux-amd64/helm Nov 6 23:44:32.673021 extend-filesystems[1525]: resize2fs 1.47.1 (20-May-2024) Nov 6 23:44:32.678600 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 6 23:44:32.729281 systemd-logind[1482]: New seat seat0. Nov 6 23:44:32.733766 systemd-logind[1482]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 23:44:32.733779 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 23:44:32.733924 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 23:44:32.740925 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:44:32.756263 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:44:32.758054 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:44:32.769966 systemd[1]: Starting sshkeys.service... Nov 6 23:44:32.782527 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1323) Nov 6 23:44:32.789320 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 6 23:44:32.799598 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 23:44:32.826719 coreos-metadata[1547]: Nov 06 23:44:32.825 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 6 23:44:32.827736 coreos-metadata[1547]: Nov 06 23:44:32.827 INFO Fetch successful Nov 6 23:44:32.852890 unknown[1547]: wrote ssh authorized keys file for user: core Nov 6 23:44:32.861516 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 6 23:44:32.901464 extend-filesystems[1525]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 6 23:44:32.901464 extend-filesystems[1525]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 6 23:44:32.901464 extend-filesystems[1525]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 6 23:44:32.912386 extend-filesystems[1474]: Resized filesystem in /dev/sda9 Nov 6 23:44:32.912386 extend-filesystems[1474]: Found sr0 Nov 6 23:44:32.913671 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:44:32.913808 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
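resize2fs grows /dev/sda9 on-line from 1617920 to 9393147 blocks of 4 KiB in the lines above. Converting those block counts shows the root filesystem going from roughly 6.2 GiB to about 35.8 GiB:

```python
# Sketch: sanity-check the on-line resize reported above by converting the
# ext4 block counts (4 KiB blocks on sda9) into human-readable sizes.
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs message

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

old_blocks, new_blocks = 1_617_920, 9_393_147
print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")   # ~35.83 GiB
print(f"grown by {blocks_to_gib(new_blocks - old_blocks):.2f} GiB")
```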
Nov 6 23:44:32.917223 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:44:32.917967 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 23:44:32.920321 systemd[1]: Finished sshkeys.service. Nov 6 23:44:32.931840 containerd[1509]: time="2025-11-06T23:44:32.931786279Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 6 23:44:32.971504 containerd[1509]: time="2025-11-06T23:44:32.969086486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.974740823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.974761573Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.974773423Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.974877513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.974886603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.974932683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.974939563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.975066984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.975075304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.975082534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975414 containerd[1509]: time="2025-11-06T23:44:32.975088274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975624 containerd[1509]: time="2025-11-06T23:44:32.975128404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975624 containerd[1509]: time="2025-11-06T23:44:32.975244634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975624 containerd[1509]: time="2025-11-06T23:44:32.975310284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:44:32.975624 containerd[1509]: time="2025-11-06T23:44:32.975317174Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 6 23:44:32.975624 containerd[1509]: time="2025-11-06T23:44:32.975367414Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 6 23:44:32.975624 containerd[1509]: time="2025-11-06T23:44:32.975392334Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:44:32.980577 containerd[1509]: time="2025-11-06T23:44:32.980560520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 6 23:44:32.980652 containerd[1509]: time="2025-11-06T23:44:32.980645690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 6 23:44:32.980697 containerd[1509]: time="2025-11-06T23:44:32.980691391Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 6 23:44:32.980727 containerd[1509]: time="2025-11-06T23:44:32.980720131Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 6 23:44:32.980765 containerd[1509]: time="2025-11-06T23:44:32.980759031Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 6 23:44:32.980869 containerd[1509]: time="2025-11-06T23:44:32.980859391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 6 23:44:32.981674 containerd[1509]: time="2025-11-06T23:44:32.981663932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 6 23:44:32.981802 containerd[1509]: time="2025-11-06T23:44:32.981790572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 6 23:44:32.981840 containerd[1509]: time="2025-11-06T23:44:32.981833922Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 6 23:44:32.981867 containerd[1509]: time="2025-11-06T23:44:32.981862212Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 6 23:44:32.981893 containerd[1509]: time="2025-11-06T23:44:32.981888312Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 6 23:44:32.981945 containerd[1509]: time="2025-11-06T23:44:32.981938272Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 6 23:44:32.981973 containerd[1509]: time="2025-11-06T23:44:32.981967962Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983509754Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983522014Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983530974Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983539684Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983545994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983560174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983568344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983576344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983584504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983591654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983600894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983608514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983616974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984429 containerd[1509]: time="2025-11-06T23:44:32.983625784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983634944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983642074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983649404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983656604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983665034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983678684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983690744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983696794Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983726564Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983736524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983742994Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983750084Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 6 23:44:32.984648 containerd[1509]: time="2025-11-06T23:44:32.983755394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 6 23:44:32.984787 containerd[1509]: time="2025-11-06T23:44:32.983761844Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 6 23:44:32.984787 containerd[1509]: time="2025-11-06T23:44:32.983767564Z" level=info msg="NRI interface is disabled by configuration." Nov 6 23:44:32.984787 containerd[1509]: time="2025-11-06T23:44:32.983773414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 6 23:44:32.984820 containerd[1509]: time="2025-11-06T23:44:32.983989315Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 6 23:44:32.984820 containerd[1509]: time="2025-11-06T23:44:32.984017265Z" level=info msg="Connect containerd service" Nov 6 23:44:32.984820 containerd[1509]: time="2025-11-06T23:44:32.984041435Z" level=info msg="using legacy CRI server" Nov 6 23:44:32.984820 containerd[1509]: time="2025-11-06T23:44:32.984045825Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:44:32.984820 containerd[1509]: time="2025-11-06T23:44:32.984114025Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 6 23:44:32.985132 containerd[1509]: time="2025-11-06T23:44:32.985117496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:44:32.985613 
containerd[1509]: time="2025-11-06T23:44:32.985595357Z" level=info msg="Start subscribing containerd event" Nov 6 23:44:32.985655 containerd[1509]: time="2025-11-06T23:44:32.985649587Z" level=info msg="Start recovering state" Nov 6 23:44:32.985716 containerd[1509]: time="2025-11-06T23:44:32.985710907Z" level=info msg="Start event monitor" Nov 6 23:44:32.985745 containerd[1509]: time="2025-11-06T23:44:32.985740397Z" level=info msg="Start snapshots syncer" Nov 6 23:44:32.985771 containerd[1509]: time="2025-11-06T23:44:32.985766897Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:44:32.985791 containerd[1509]: time="2025-11-06T23:44:32.985787377Z" level=info msg="Start streaming server" Nov 6 23:44:32.988678 containerd[1509]: time="2025-11-06T23:44:32.987565169Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:44:32.988678 containerd[1509]: time="2025-11-06T23:44:32.987592379Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:44:32.987689 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:44:32.989428 containerd[1509]: time="2025-11-06T23:44:32.988791481Z" level=info msg="containerd successfully booted in 0.059083s" Nov 6 23:44:32.994725 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:44:33.010275 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:44:33.025722 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 23:44:33.032629 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:44:33.032866 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:44:33.047463 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:44:33.055386 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:44:33.064529 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 23:44:33.069475 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 23:44:33.070908 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 23:44:33.222526 tar[1498]: linux-amd64/README.md Nov 6 23:44:33.229636 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:44:33.568151 coreos-metadata[1471]: Nov 06 23:44:33.568 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2 Nov 6 23:44:33.569513 coreos-metadata[1471]: Nov 06 23:44:33.569 INFO Fetch successful Nov 6 23:44:33.569898 coreos-metadata[1471]: Nov 06 23:44:33.569 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 6 23:44:33.570666 coreos-metadata[1471]: Nov 06 23:44:33.570 INFO Fetch successful Nov 6 23:44:33.607766 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 23:44:33.610246 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 23:44:33.691743 systemd-networkd[1411]: eth0: Gained IPv6LL Nov 6 23:44:33.692242 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Nov 6 23:44:33.693513 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 23:44:33.695363 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:44:33.703665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:44:33.707552 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
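The containerd startup above skips a number of snapshotter and tracing plugins (aufs, blockfile, btrfs, devmapper, zfs, otlp), each with the reason in the error field. A rough sketch that parses containerd's key="value" log fields to list those skip decisions; the sample line is simplified from the journal above:

```python
# Sketch: extract "skip loading plugin" decisions from containerd startup lines.
import re

PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_fields(line: str) -> dict:
    fields = {}
    for key, value in PAIR.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        fields[key] = value
    return fields

def skipped_plugins(lines):
    for line in lines:
        fields = parse_fields(line)
        if fields.get("msg", "").startswith("skip loading plugin"):
            yield fields.get("msg"), fields.get("error", "")

sample = ('time="2025-11-06T23:44:32.974740823Z" level=info '
          'msg="skip loading plugin \\"io.containerd.snapshotter.v1.aufs\\"..." '
          'error="aufs is not supported (modprobe aufs failed)" '
          'type=io.containerd.snapshotter.v1')
print(list(skipped_plugins([sample])))
```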
Nov 6 23:44:33.731096 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:44:34.459653 systemd-networkd[1411]: eth1: Gained IPv6LL Nov 6 23:44:34.460040 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Nov 6 23:44:34.880014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:44:34.882866 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:44:34.885145 systemd[1]: Startup finished in 1.323s (kernel) + 5.634s (initrd) + 4.529s (userspace) = 11.486s. Nov 6 23:44:34.887132 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:44:35.715679 kubelet[1605]: E1106 23:44:35.715596 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:44:35.718712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:44:35.718835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:44:35.719126 systemd[1]: kubelet.service: Consumed 1.384s CPU time, 266.1M memory peak. Nov 6 23:44:45.794667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:44:45.802173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:44:45.924056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:44:45.925972 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:44:45.992016 kubelet[1623]: E1106 23:44:45.991902 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:44:45.996609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:44:45.996800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:44:45.997184 systemd[1]: kubelet.service: Consumed 181ms CPU time, 111.3M memory peak. Nov 6 23:44:56.044735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 23:44:56.061965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:44:56.155335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
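The "Startup finished" line above splits the 11.486 s boot into kernel, initrd, and userspace phases. A trivial check that the phases add up to the reported total:

```python
# Sketch: verify the per-phase boot timings against the reported total.
phases = {"kernel": 1.323, "initrd": 5.634, "userspace": 4.529}
total = sum(phases.values())
print(" + ".join(f"{v}s ({k})" for k, v in phases.items()), f"= {total:.3f}s")
assert round(total, 3) == 11.486
```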
Nov 6 23:44:56.167791 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:44:56.210776 kubelet[1638]: E1106 23:44:56.210710 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:44:56.213748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:44:56.213885 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:44:56.214245 systemd[1]: kubelet.service: Consumed 135ms CPU time, 112.5M memory peak. Nov 6 23:45:05.148256 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 23:45:05.154941 systemd[1]: Started sshd@0-46.62.225.38:22-139.178.89.65:54378.service - OpenSSH per-connection server daemon (139.178.89.65:54378). Nov 6 23:45:06.184524 sshd[1647]: Accepted publickey for core from 139.178.89.65 port 54378 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:45:06.186156 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:06.201976 systemd-logind[1482]: New session 1 of user core. Nov 6 23:45:06.203850 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:45:06.209242 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:45:06.221507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 23:45:06.221886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:45:06.226891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:06.228720 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 23:45:06.237475 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:45:06.240036 systemd-logind[1482]: New session c1 of user core. Nov 6 23:45:06.323601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:45:06.325652 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:45:06.355297 kubelet[1665]: E1106 23:45:06.352977 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:45:06.354797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:45:06.354893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:45:06.355098 systemd[1]: kubelet.service: Consumed 84ms CPU time, 112.1M memory peak. Nov 6 23:45:06.359832 systemd[1652]: Queued start job for default target default.target. Nov 6 23:45:06.366518 systemd[1652]: Created slice app.slice - User Application Slice. Nov 6 23:45:06.366546 systemd[1652]: Reached target paths.target - Paths. Nov 6 23:45:06.366666 systemd[1652]: Reached target timers.target - Timers. 
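sshd is socket-activated here, so each connection gets its own unit such as sshd@0-46.62.225.38:22-139.178.89.65:54378.service. A sketch that unpacks that instance name into counter, local endpoint, and peer endpoint (IPv4 only; IPv6 instances carry extra colons and would need more care):

```python
# Sketch: parse a per-connection sshd@ unit name of the form
# sshd@<counter>-<local addr>:<port>-<peer addr>:<port>.service
def parse_sshd_instance(unit: str) -> dict:
    instance = unit.removeprefix("sshd@").removesuffix(".service")
    counter, local, peer = instance.split("-")
    local_addr, local_port = local.rsplit(":", 1)
    peer_addr, peer_port = peer.rsplit(":", 1)
    return {"counter": int(counter),
            "local": (local_addr, int(local_port)),
            "peer": (peer_addr, int(peer_port))}

print(parse_sshd_instance("sshd@0-46.62.225.38:22-139.178.89.65:54378.service"))
```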
Nov 6 23:45:06.367864 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:45:06.376842 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:45:06.376918 systemd[1652]: Reached target sockets.target - Sockets. Nov 6 23:45:06.376952 systemd[1652]: Reached target basic.target - Basic System. Nov 6 23:45:06.376987 systemd[1652]: Reached target default.target - Main User Target. Nov 6 23:45:06.377009 systemd[1652]: Startup finished in 130ms. Nov 6 23:45:06.377151 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:45:06.380693 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:45:07.132543 systemd[1]: Started sshd@1-46.62.225.38:22-139.178.89.65:38218.service - OpenSSH per-connection server daemon (139.178.89.65:38218). Nov 6 23:45:08.256849 sshd[1678]: Accepted publickey for core from 139.178.89.65 port 38218 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:45:08.259040 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:08.266562 systemd-logind[1482]: New session 2 of user core. Nov 6 23:45:08.278776 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 23:45:09.028242 sshd[1680]: Connection closed by 139.178.89.65 port 38218 Nov 6 23:45:09.028929 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Nov 6 23:45:09.032661 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit. Nov 6 23:45:09.033209 systemd[1]: sshd@1-46.62.225.38:22-139.178.89.65:38218.service: Deactivated successfully. Nov 6 23:45:09.035318 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 23:45:09.036739 systemd-logind[1482]: Removed session 2. Nov 6 23:45:09.229307 systemd[1]: Started sshd@2-46.62.225.38:22-139.178.89.65:38220.service - OpenSSH per-connection server daemon (139.178.89.65:38220). Nov 6 23:45:10.366332 sshd[1686]: Accepted publickey for core from 139.178.89.65 port 38220 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:45:10.368447 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:10.377252 systemd-logind[1482]: New session 3 of user core. Nov 6 23:45:10.386741 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:45:11.130603 sshd[1688]: Connection closed by 139.178.89.65 port 38220 Nov 6 23:45:11.131345 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Nov 6 23:45:11.136415 systemd[1]: sshd@2-46.62.225.38:22-139.178.89.65:38220.service: Deactivated successfully. Nov 6 23:45:11.139240 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 23:45:11.140412 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit. Nov 6 23:45:11.142206 systemd-logind[1482]: Removed session 3. Nov 6 23:45:11.298867 systemd[1]: Started sshd@3-46.62.225.38:22-139.178.89.65:38228.service - OpenSSH per-connection server daemon (139.178.89.65:38228). Nov 6 23:45:12.318692 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 38228 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:45:12.320716 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:12.329083 systemd-logind[1482]: New session 4 of user core. Nov 6 23:45:12.345598 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 6 23:45:13.019690 sshd[1696]: Connection closed by 139.178.89.65 port 38228 Nov 6 23:45:13.020398 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Nov 6 23:45:13.023639 systemd[1]: sshd@3-46.62.225.38:22-139.178.89.65:38228.service: Deactivated successfully. Nov 6 23:45:13.025979 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:45:13.027191 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:45:13.028735 systemd-logind[1482]: Removed session 4. Nov 6 23:45:13.233047 systemd[1]: Started sshd@4-46.62.225.38:22-139.178.89.65:38232.service - OpenSSH per-connection server daemon (139.178.89.65:38232). Nov 6 23:45:14.344109 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 38232 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:45:14.345921 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:14.353469 systemd-logind[1482]: New session 5 of user core. Nov 6 23:45:14.358746 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 23:45:14.793524 systemd-timesyncd[1413]: Timed out waiting for reply from 85.215.93.134:123 (2.flatcar.pool.ntp.org). Nov 6 23:45:14.941954 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:45:14.942322 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:45:14.958997 sudo[1705]: pam_unix(sudo:session): session closed for user root Nov 6 23:45:15.139346 sshd[1704]: Connection closed by 139.178.89.65 port 38232 Nov 6 23:45:15.140422 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Nov 6 23:45:15.144995 systemd[1]: sshd@4-46.62.225.38:22-139.178.89.65:38232.service: Deactivated successfully. Nov 6 23:45:15.147857 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 23:45:15.150214 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit. Nov 6 23:45:15.151995 systemd-logind[1482]: Removed session 5. Nov 6 23:45:15.342849 systemd[1]: Started sshd@5-46.62.225.38:22-139.178.89.65:38238.service - OpenSSH per-connection server daemon (139.178.89.65:38238). Nov 6 23:45:16.490043 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 38238 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:45:16.492198 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:16.494055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 6 23:45:16.502820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:16.508479 systemd-logind[1482]: New session 6 of user core. Nov 6 23:45:16.515648 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 23:45:16.641696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 23:45:16.654036 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:45:16.716525 kubelet[1721]: E1106 23:45:16.716307 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:45:16.718778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:45:16.719007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:45:16.719518 systemd[1]: kubelet.service: Consumed 186ms CPU time, 109.9M memory peak. Nov 6 23:45:17.083961 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:45:17.084391 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:45:17.088659 sudo[1731]: pam_unix(sudo:session): session closed for user root Nov 6 23:45:17.096418 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:45:17.097019 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:45:17.111739 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:45:17.131220 augenrules[1753]: No rules Nov 6 23:45:17.132276 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:45:17.132482 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:45:17.133410 sudo[1730]: pam_unix(sudo:session): session closed for user root Nov 6 23:45:17.315315 sshd[1716]: Connection closed by 139.178.89.65 port 38238 Nov 6 23:45:17.316163 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Nov 6 23:45:17.320169 systemd[1]: sshd@5-46.62.225.38:22-139.178.89.65:38238.service: Deactivated successfully. Nov 6 23:45:17.322794 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:45:17.324897 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:45:17.326704 systemd-logind[1482]: Removed session 6. Nov 6 23:45:17.399766 update_engine[1492]: I20251106 23:45:17.399547 1492 update_attempter.cc:509] Updating boot flags... Nov 6 23:45:17.433517 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1770) Nov 6 23:45:17.477702 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1773) Nov 6 23:45:17.479116 systemd[1]: Started sshd@6-46.62.225.38:22-139.178.89.65:52400.service - OpenSSH per-connection server daemon (139.178.89.65:52400). Nov 6 23:45:18.496296 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 52400 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:45:18.497991 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:18.504693 systemd-logind[1482]: New session 7 of user core. Nov 6 23:45:18.514784 systemd[1]: Started session-7.scope - Session 7 of User core. 
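Every kubelet start above exits with the same error: /var/lib/kubelet/config.yaml is missing, a file that kubeadm normally writes during init or join. A small, illustrative precondition check along those lines (the KubeletConfiguration probe is a loose text match, not a full parser):

```python
# Sketch: reproduce the precondition the repeatedly failing kubelet runs above
# keep tripping over - the kubelet config file must exist before start.
from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")

def check_kubelet_config(path: Path = CONFIG) -> str:
    if not path.is_file():
        return f"{path}: no such file or directory (kubelet will exit 1)"
    text = path.read_text()
    if "kind: KubeletConfiguration" not in text:
        return f"{path}: exists but does not look like a KubeletConfiguration"
    return f"{path}: looks usable"

print(check_kubelet_config())
```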
Nov 6 23:45:19.032737 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:45:19.032976 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:45:19.282751 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 23:45:19.298110 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:45:19.588779 dockerd[1800]: time="2025-11-06T23:45:19.588698287Z" level=info msg="Starting up" Nov 6 23:45:19.693255 dockerd[1800]: time="2025-11-06T23:45:19.693203258Z" level=info msg="Loading containers: start." Nov 6 23:45:19.846627 kernel: Initializing XFRM netlink socket Nov 6 23:45:19.951176 systemd-networkd[1411]: docker0: Link UP Nov 6 23:45:19.992742 dockerd[1800]: time="2025-11-06T23:45:19.992667732Z" level=info msg="Loading containers: done." Nov 6 23:45:20.016406 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4166987784-merged.mount: Deactivated successfully. Nov 6 23:45:20.019419 dockerd[1800]: time="2025-11-06T23:45:20.018697915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:45:20.019419 dockerd[1800]: time="2025-11-06T23:45:20.018847165Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Nov 6 23:45:20.019419 dockerd[1800]: time="2025-11-06T23:45:20.019006155Z" level=info msg="Daemon has completed initialization" Nov 6 23:45:20.060783 dockerd[1800]: time="2025-11-06T23:45:20.060537997Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:45:20.060930 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:45:21.408859 containerd[1509]: time="2025-11-06T23:45:21.408773512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 6 23:45:22.017116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573155241.mount: Deactivated successfully. 
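dockerd reports its API listening on /run/docker.sock above. As a rough stdlib-only sketch, one can issue GET /version over that unix socket directly; a real client would use the Docker SDK and handle chunked responses properly, and the socket requires root or docker-group access:

```python
# Sketch: query the Docker API announced above ("API listen on /run/docker.sock")
# with a raw HTTP request over the unix socket.
import json
import socket

def docker_version(sock_path: str = "/run/docker.sock") -> dict:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"GET /version HTTP/1.1\r\n"
                  b"Host: docker\r\nConnection: close\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk
    # Crude but sufficient for a sketch: grab the JSON body after the headers.
    body = raw[raw.find(b"{"): raw.rfind(b"}") + 1]
    return json.loads(body)

if __name__ == "__main__":
    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))
```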
Nov 6 23:45:23.772319 containerd[1509]: time="2025-11-06T23:45:23.772245636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:23.773924 containerd[1509]: time="2025-11-06T23:45:23.773652428Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28838016" Nov 6 23:45:23.776519 containerd[1509]: time="2025-11-06T23:45:23.775062060Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:23.778243 containerd[1509]: time="2025-11-06T23:45:23.778214034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:23.778852 containerd[1509]: time="2025-11-06T23:45:23.778826624Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.370008242s" Nov 6 23:45:23.778852 containerd[1509]: time="2025-11-06T23:45:23.778852095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 6 23:45:23.779318 containerd[1509]: time="2025-11-06T23:45:23.779286425Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 6 23:45:25.043482 systemd-timesyncd[1413]: Timed out waiting for reply from 77.90.40.94:123 (2.flatcar.pool.ntp.org). Nov 6 23:45:25.071316 systemd-timesyncd[1413]: Contacted time server 168.119.211.223:123 (2.flatcar.pool.ntp.org). Nov 6 23:45:25.071380 systemd-timesyncd[1413]: Initial clock synchronization to Thu 2025-11-06 23:45:25.118565 UTC. 
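systemd-timesyncd above times out against one pool server before synchronizing with 168.119.211.223. A minimal SNTP query in the same spirit (UDP port 123, 48-byte request, transmit-timestamp seconds at offset 40), shown purely as an illustration:

```python
# Sketch: a bare-bones SNTP query like the ones systemd-timesyncd retries above.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "2.flatcar.pool.ntp.org", timeout: float = 2.0) -> float:
    packet = b"\x23" + 47 * b"\0"   # LI=0, version 4, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)       # timeouts like the ones logged above raise here
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    transmit_secs = struct.unpack("!I", data[40:44])[0]
    return transmit_secs - NTP_EPOCH_OFFSET

print("server unix time:", sntp_time(), "local unix time:", time.time())
```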
Nov 6 23:45:25.167857 containerd[1509]: time="2025-11-06T23:45:25.167791811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:25.168955 containerd[1509]: time="2025-11-06T23:45:25.168843332Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787049" Nov 6 23:45:25.169991 containerd[1509]: time="2025-11-06T23:45:25.169778723Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:25.171928 containerd[1509]: time="2025-11-06T23:45:25.171894576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:25.172611 containerd[1509]: time="2025-11-06T23:45:25.172588737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.393268842s" Nov 6 23:45:25.172761 containerd[1509]: time="2025-11-06T23:45:25.172667907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 6 23:45:25.173167 containerd[1509]: time="2025-11-06T23:45:25.173132067Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 6 23:45:26.429694 containerd[1509]: time="2025-11-06T23:45:26.429626920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:26.431113 containerd[1509]: time="2025-11-06T23:45:26.430628140Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176311" Nov 6 23:45:26.432503 containerd[1509]: time="2025-11-06T23:45:26.432452404Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:26.434585 containerd[1509]: time="2025-11-06T23:45:26.434539705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:26.435556 containerd[1509]: time="2025-11-06T23:45:26.435378786Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.262219575s" Nov 6 23:45:26.435556 containerd[1509]: time="2025-11-06T23:45:26.435405605Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 6 23:45:26.436222 containerd[1509]: 
time="2025-11-06T23:45:26.435892445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 6 23:45:26.794607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 6 23:45:26.799781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:26.914608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:45:26.916054 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:45:26.942119 kubelet[2060]: E1106 23:45:26.942055 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:45:26.944454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:45:26.944659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:45:26.944929 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.8M memory peak. Nov 6 23:45:27.560390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851350842.mount: Deactivated successfully. Nov 6 23:45:27.954913 containerd[1509]: time="2025-11-06T23:45:27.954798748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:27.955928 containerd[1509]: time="2025-11-06T23:45:27.955808246Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924234" Nov 6 23:45:27.957508 containerd[1509]: time="2025-11-06T23:45:27.956846475Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:27.959004 containerd[1509]: time="2025-11-06T23:45:27.958978094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:27.959440 containerd[1509]: time="2025-11-06T23:45:27.959412669Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.523500097s" Nov 6 23:45:27.959470 containerd[1509]: time="2025-11-06T23:45:27.959445170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 6 23:45:27.959921 containerd[1509]: time="2025-11-06T23:45:27.959831099Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 6 23:45:28.489968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306063580.mount: Deactivated successfully. 
Nov 6 23:45:29.292627 containerd[1509]: time="2025-11-06T23:45:29.292564988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:29.293933 containerd[1509]: time="2025-11-06T23:45:29.293890189Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Nov 6 23:45:29.294972 containerd[1509]: time="2025-11-06T23:45:29.294784529Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:29.298442 containerd[1509]: time="2025-11-06T23:45:29.298425524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:29.299267 containerd[1509]: time="2025-11-06T23:45:29.299252038Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.339400494s" Nov 6 23:45:29.299325 containerd[1509]: time="2025-11-06T23:45:29.299315473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 6 23:45:29.299839 containerd[1509]: time="2025-11-06T23:45:29.299749892Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 23:45:29.755333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165711541.mount: Deactivated successfully. 
Nov 6 23:45:29.763643 containerd[1509]: time="2025-11-06T23:45:29.763535046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:29.765130 containerd[1509]: time="2025-11-06T23:45:29.764722390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Nov 6 23:45:29.768393 containerd[1509]: time="2025-11-06T23:45:29.766668123Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:29.770545 containerd[1509]: time="2025-11-06T23:45:29.769942845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:29.771741 containerd[1509]: time="2025-11-06T23:45:29.770858237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 470.954299ms" Nov 6 23:45:29.771741 containerd[1509]: time="2025-11-06T23:45:29.770898595Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 23:45:29.772255 containerd[1509]: time="2025-11-06T23:45:29.772192459Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 6 23:45:30.281007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount458154611.mount: Deactivated successfully. Nov 6 23:45:31.760622 containerd[1509]: time="2025-11-06T23:45:31.760569031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:31.762223 containerd[1509]: time="2025-11-06T23:45:31.761954399Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Nov 6 23:45:31.765509 containerd[1509]: time="2025-11-06T23:45:31.763388302Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:31.766773 containerd[1509]: time="2025-11-06T23:45:31.766750638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:31.767514 containerd[1509]: time="2025-11-06T23:45:31.767473277Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.99523183s" Nov 6 23:45:31.767514 containerd[1509]: time="2025-11-06T23:45:31.767511619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 6 23:45:33.855663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
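With the etcd pull finished, all seven images pulled so far (the three control-plane components, kube-proxy, CoreDNS, pause and etcd) are on the node, about 184 MB in total according to the sizes containerd reported. A short Go sketch of that sum, using only the byte counts from the "Pulled image" lines above:

    package main

    import "fmt"

    func main() {
        // Image sizes (bytes) as reported by containerd in the log above.
        sizes := map[string]int64{
            "kube-apiserver:v1.32.9":          28834515,
            "kube-controller-manager:v1.32.9": 26421706,
            "kube-scheduler:v1.32.9":          20810986,
            "kube-proxy:v1.32.9":              30923225,
            "coredns:v1.11.3":                 18562039,
            "pause:3.10":                      320368,
            "etcd:3.5.16-0":                   57680541,
        }
        var total int64
        for _, s := range sizes {
            total += s
        }
        fmt.Printf("total pulled: %d bytes (~%.0f MB)\n", total, float64(total)/1e6) // ~184 MB
    }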
Nov 6 23:45:33.855802 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.8M memory peak. Nov 6 23:45:33.863720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:33.896638 systemd[1]: Reload requested from client PID 2213 ('systemctl') (unit session-7.scope)... Nov 6 23:45:33.896654 systemd[1]: Reloading... Nov 6 23:45:34.017568 zram_generator::config[2259]: No configuration found. Nov 6 23:45:34.127466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:45:34.242960 systemd[1]: Reloading finished in 345 ms. Nov 6 23:45:34.285530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:45:34.289222 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:45:34.294193 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:34.295909 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:45:34.296120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:45:34.296163 systemd[1]: kubelet.service: Consumed 81ms CPU time, 101.5M memory peak. Nov 6 23:45:34.302788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:34.405780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:45:34.411751 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:45:34.459883 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:45:34.459883 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:45:34.459883 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 23:45:34.460449 kubelet[2318]: I1106 23:45:34.459943 2318 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:45:34.985527 kubelet[2318]: I1106 23:45:34.984658 2318 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 23:45:34.985527 kubelet[2318]: I1106 23:45:34.984692 2318 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:45:34.985527 kubelet[2318]: I1106 23:45:34.985077 2318 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 23:45:35.016602 kubelet[2318]: E1106 23:45:35.016539 2318 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://46.62.225.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:35.017331 kubelet[2318]: I1106 23:45:35.017205 2318 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:45:35.030338 kubelet[2318]: E1106 23:45:35.030302 2318 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:45:35.030338 kubelet[2318]: I1106 23:45:35.030330 2318 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:45:35.034824 kubelet[2318]: I1106 23:45:35.034799 2318 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:45:35.037320 kubelet[2318]: I1106 23:45:35.037283 2318 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:45:35.037503 kubelet[2318]: I1106 23:45:35.037315 2318 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-4-n-9b34d37d4f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:45:35.038822 kubelet[2318]: I1106 23:45:35.038794 2318 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:45:35.038822 kubelet[2318]: I1106 23:45:35.038817 2318 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 23:45:35.039847 kubelet[2318]: I1106 23:45:35.039820 2318 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:45:35.042826 kubelet[2318]: I1106 23:45:35.042765 2318 kubelet.go:446] "Attempting to sync node with API server" Nov 6 23:45:35.042826 kubelet[2318]: I1106 23:45:35.042789 2318 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:45:35.042826 kubelet[2318]: I1106 23:45:35.042809 2318 kubelet.go:352] "Adding apiserver pod source" Nov 6 23:45:35.042826 kubelet[2318]: I1106 23:45:35.042820 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:45:35.050252 kubelet[2318]: W1106 23:45:35.050194 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.225.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-9b34d37d4f&limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:35.050314 kubelet[2318]: E1106 23:45:35.050259 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.225.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-9b34d37d4f&limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:35.051521 kubelet[2318]: 
W1106 23:45:35.050813 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.225.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:35.051521 kubelet[2318]: E1106 23:45:35.050851 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.225.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:35.054699 kubelet[2318]: I1106 23:45:35.054586 2318 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:45:35.057431 kubelet[2318]: I1106 23:45:35.057412 2318 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 23:45:35.058766 kubelet[2318]: W1106 23:45:35.057952 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 23:45:35.058766 kubelet[2318]: I1106 23:45:35.058438 2318 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:45:35.058766 kubelet[2318]: I1106 23:45:35.058507 2318 server.go:1287] "Started kubelet" Nov 6 23:45:35.060930 kubelet[2318]: I1106 23:45:35.060616 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:45:35.060930 kubelet[2318]: I1106 23:45:35.060890 2318 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:45:35.063265 kubelet[2318]: I1106 23:45:35.063254 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:45:35.069753 kubelet[2318]: E1106 23:45:35.066558 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.225.38:6443/api/v1/namespaces/default/events\": dial tcp 46.62.225.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-4-n-9b34d37d4f.18758f9d53784256 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-4-n-9b34d37d4f,UID:ci-4230-2-4-n-9b34d37d4f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-n-9b34d37d4f,},FirstTimestamp:2025-11-06 23:45:35.058444886 +0000 UTC m=+0.643548460,LastTimestamp:2025-11-06 23:45:35.058444886 +0000 UTC m=+0.643548460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-n-9b34d37d4f,}" Nov 6 23:45:35.071076 kubelet[2318]: I1106 23:45:35.070387 2318 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:45:35.071942 kubelet[2318]: I1106 23:45:35.071428 2318 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:45:35.071942 kubelet[2318]: E1106 23:45:35.071620 2318 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" Nov 6 23:45:35.075356 kubelet[2318]: I1106 23:45:35.075343 2318 server.go:479] "Adding debug handlers to kubelet server" Nov 6 23:45:35.075583 kubelet[2318]: I1106 23:45:35.075573 2318 desired_state_of_world_populator.go:150] "Desired state 
populator starts to run" Nov 6 23:45:35.075694 kubelet[2318]: I1106 23:45:35.075675 2318 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:45:35.077141 kubelet[2318]: I1106 23:45:35.076707 2318 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:45:35.078323 kubelet[2318]: E1106 23:45:35.078294 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": dial tcp 46.62.225.38:6443: connect: connection refused" interval="200ms" Nov 6 23:45:35.082298 kubelet[2318]: W1106 23:45:35.082150 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.62.225.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:35.082298 kubelet[2318]: E1106 23:45:35.082196 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.225.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:35.082524 kubelet[2318]: I1106 23:45:35.082380 2318 factory.go:221] Registration of the containerd container factory successfully Nov 6 23:45:35.082524 kubelet[2318]: I1106 23:45:35.082391 2318 factory.go:221] Registration of the systemd container factory successfully Nov 6 23:45:35.082524 kubelet[2318]: I1106 23:45:35.082450 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:45:35.084563 kubelet[2318]: I1106 23:45:35.084533 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 23:45:35.087275 kubelet[2318]: I1106 23:45:35.087258 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 6 23:45:35.087322 kubelet[2318]: I1106 23:45:35.087278 2318 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 23:45:35.087322 kubelet[2318]: I1106 23:45:35.087296 2318 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
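Every list, watch, lease and event call in this stretch fails with "dial tcp 46.62.225.38:6443: connect: connection refused". That is expected at this stage: the kube-apiserver the kubelet is trying to reach runs as one of the static pods this same kubelet has not started yet, so the errors persist until the apiserver sandbox and container come up a little later in the log. A hedged diagnostic sketch of the probe itself, with the endpoint taken from the log and the timeout and retry count assumed:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const apiserver = "46.62.225.38:6443" // endpoint from the refused connections above
        for attempt := 1; attempt <= 5; attempt++ {
            conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
            if err != nil {
                fmt.Printf("attempt %d: %v\n", attempt, err) // mirrors "connect: connection refused"
                time.Sleep(3 * time.Second)
                continue
            }
            conn.Close()
            fmt.Println("apiserver port is accepting connections")
            return
        }
    }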
Nov 6 23:45:35.087322 kubelet[2318]: I1106 23:45:35.087303 2318 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 23:45:35.087384 kubelet[2318]: E1106 23:45:35.087336 2318 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:45:35.092479 kubelet[2318]: W1106 23:45:35.092443 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.225.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:35.092585 kubelet[2318]: E1106 23:45:35.092486 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.225.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:35.108203 kubelet[2318]: E1106 23:45:35.107153 2318 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:45:35.110634 kubelet[2318]: I1106 23:45:35.110611 2318 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:45:35.110697 kubelet[2318]: I1106 23:45:35.110622 2318 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:45:35.110697 kubelet[2318]: I1106 23:45:35.110655 2318 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:45:35.112810 kubelet[2318]: I1106 23:45:35.112783 2318 policy_none.go:49] "None policy: Start" Nov 6 23:45:35.112810 kubelet[2318]: I1106 23:45:35.112799 2318 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:45:35.112810 kubelet[2318]: I1106 23:45:35.112810 2318 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:45:35.118238 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 23:45:35.125461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:45:35.127909 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 23:45:35.134152 kubelet[2318]: I1106 23:45:35.134049 2318 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 23:45:35.134236 kubelet[2318]: I1106 23:45:35.134218 2318 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:45:35.134269 kubelet[2318]: I1106 23:45:35.134228 2318 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:45:35.136864 kubelet[2318]: E1106 23:45:35.136803 2318 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 23:45:35.136864 kubelet[2318]: E1106 23:45:35.136847 2318 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-4-n-9b34d37d4f\" not found" Nov 6 23:45:35.137880 kubelet[2318]: I1106 23:45:35.137722 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:45:35.196529 systemd[1]: Created slice kubepods-burstable-podfdb0654bad4581ed0be549f4d71437c6.slice - libcontainer container kubepods-burstable-podfdb0654bad4581ed0be549f4d71437c6.slice. Nov 6 23:45:35.214019 kubelet[2318]: E1106 23:45:35.213715 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.219962 systemd[1]: Created slice kubepods-burstable-pod26f3957d213d4e6c6136649e242c462c.slice - libcontainer container kubepods-burstable-pod26f3957d213d4e6c6136649e242c462c.slice. Nov 6 23:45:35.230087 kubelet[2318]: E1106 23:45:35.230007 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.234441 systemd[1]: Created slice kubepods-burstable-pod7b6e6fc7fa6e83ab88dd600b23f2098a.slice - libcontainer container kubepods-burstable-pod7b6e6fc7fa6e83ab88dd600b23f2098a.slice. Nov 6 23:45:35.238556 kubelet[2318]: I1106 23:45:35.238419 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.240452 kubelet[2318]: E1106 23:45:35.239780 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.225.38:6443/api/v1/nodes\": dial tcp 46.62.225.38:6443: connect: connection refused" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.240452 kubelet[2318]: E1106 23:45:35.240110 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.279977 kubelet[2318]: E1106 23:45:35.279905 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": dial tcp 46.62.225.38:6443: connect: connection refused" interval="400ms" Nov 6 23:45:35.377252 kubelet[2318]: I1106 23:45:35.377110 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b6e6fc7fa6e83ab88dd600b23f2098a-kubeconfig\") pod \"kube-scheduler-ci-4230-2-4-n-9b34d37d4f\" (UID: \"7b6e6fc7fa6e83ab88dd600b23f2098a\") " pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377252 kubelet[2318]: I1106 23:45:35.377215 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdb0654bad4581ed0be549f4d71437c6-ca-certs\") pod \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" (UID: \"fdb0654bad4581ed0be549f4d71437c6\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377532 kubelet[2318]: I1106 23:45:35.377336 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-ca-certs\") pod 
\"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377592 kubelet[2318]: I1106 23:45:35.377481 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377592 kubelet[2318]: I1106 23:45:35.377581 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377709 kubelet[2318]: I1106 23:45:35.377653 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377769 kubelet[2318]: I1106 23:45:35.377699 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377769 kubelet[2318]: I1106 23:45:35.377745 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdb0654bad4581ed0be549f4d71437c6-k8s-certs\") pod \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" (UID: \"fdb0654bad4581ed0be549f4d71437c6\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.377878 kubelet[2318]: I1106 23:45:35.377822 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fdb0654bad4581ed0be549f4d71437c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" (UID: \"fdb0654bad4581ed0be549f4d71437c6\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.443132 kubelet[2318]: I1106 23:45:35.442714 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.443417 kubelet[2318]: E1106 23:45:35.443340 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.225.38:6443/api/v1/nodes\": dial tcp 46.62.225.38:6443: connect: connection refused" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.518858 containerd[1509]: time="2025-11-06T23:45:35.518238417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-4-n-9b34d37d4f,Uid:fdb0654bad4581ed0be549f4d71437c6,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:35.532605 containerd[1509]: time="2025-11-06T23:45:35.531935449Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-4-n-9b34d37d4f,Uid:26f3957d213d4e6c6136649e242c462c,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:35.542075 containerd[1509]: time="2025-11-06T23:45:35.542021648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-4-n-9b34d37d4f,Uid:7b6e6fc7fa6e83ab88dd600b23f2098a,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:35.681379 kubelet[2318]: E1106 23:45:35.681225 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": dial tcp 46.62.225.38:6443: connect: connection refused" interval="800ms" Nov 6 23:45:35.845282 kubelet[2318]: I1106 23:45:35.845236 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.845590 kubelet[2318]: E1106 23:45:35.845559 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.225.38:6443/api/v1/nodes\": dial tcp 46.62.225.38:6443: connect: connection refused" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:35.993051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309415435.mount: Deactivated successfully. Nov 6 23:45:35.994723 kubelet[2318]: W1106 23:45:35.994624 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.225.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:35.994822 kubelet[2318]: E1106 23:45:35.994741 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.225.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:36.007135 containerd[1509]: time="2025-11-06T23:45:36.007069444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:45:36.012184 containerd[1509]: time="2025-11-06T23:45:36.012078114Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Nov 6 23:45:36.013789 containerd[1509]: time="2025-11-06T23:45:36.013732223Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:45:36.017160 containerd[1509]: time="2025-11-06T23:45:36.017081636Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:45:36.018671 containerd[1509]: time="2025-11-06T23:45:36.018464184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:45:36.020695 containerd[1509]: time="2025-11-06T23:45:36.020644571Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:45:36.028774 containerd[1509]: 
time="2025-11-06T23:45:36.028723380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:45:36.028885 containerd[1509]: time="2025-11-06T23:45:36.028799829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:45:36.032795 containerd[1509]: time="2025-11-06T23:45:36.032750576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.355311ms" Nov 6 23:45:36.035470 containerd[1509]: time="2025-11-06T23:45:36.035413843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.271756ms" Nov 6 23:45:36.044650 containerd[1509]: time="2025-11-06T23:45:36.044540080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.391679ms" Nov 6 23:45:36.221689 containerd[1509]: time="2025-11-06T23:45:36.218740959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:45:36.221689 containerd[1509]: time="2025-11-06T23:45:36.218822676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:45:36.221689 containerd[1509]: time="2025-11-06T23:45:36.218846173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:36.221689 containerd[1509]: time="2025-11-06T23:45:36.218938766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:36.227030 containerd[1509]: time="2025-11-06T23:45:36.225995989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:45:36.227030 containerd[1509]: time="2025-11-06T23:45:36.226045145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:45:36.227030 containerd[1509]: time="2025-11-06T23:45:36.226064535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:36.227030 containerd[1509]: time="2025-11-06T23:45:36.226135355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:36.227867 containerd[1509]: time="2025-11-06T23:45:36.227806491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:45:36.229636 containerd[1509]: time="2025-11-06T23:45:36.229012265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:45:36.229636 containerd[1509]: time="2025-11-06T23:45:36.229040198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:36.229636 containerd[1509]: time="2025-11-06T23:45:36.229151731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:36.256048 systemd[1]: Started cri-containerd-2084121e9a1f8056618a5e6731bf8bb074b8a806300441b199ea465d548d1fac.scope - libcontainer container 2084121e9a1f8056618a5e6731bf8bb074b8a806300441b199ea465d548d1fac. Nov 6 23:45:36.261353 systemd[1]: Started cri-containerd-2bb89907ccb8cd0d48a28187becbb529aa099741488440cf68f65d4f2d973e79.scope - libcontainer container 2bb89907ccb8cd0d48a28187becbb529aa099741488440cf68f65d4f2d973e79. Nov 6 23:45:36.265637 systemd[1]: Started cri-containerd-6784b1a5f8404f9f03230b6c801cd021cdfb4e0cf35a00b1b09f895f7ef398ac.scope - libcontainer container 6784b1a5f8404f9f03230b6c801cd021cdfb4e0cf35a00b1b09f895f7ef398ac. Nov 6 23:45:36.305133 containerd[1509]: time="2025-11-06T23:45:36.305055749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-4-n-9b34d37d4f,Uid:fdb0654bad4581ed0be549f4d71437c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2084121e9a1f8056618a5e6731bf8bb074b8a806300441b199ea465d548d1fac\"" Nov 6 23:45:36.310934 containerd[1509]: time="2025-11-06T23:45:36.310759319Z" level=info msg="CreateContainer within sandbox \"2084121e9a1f8056618a5e6731bf8bb074b8a806300441b199ea465d548d1fac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:45:36.327395 containerd[1509]: time="2025-11-06T23:45:36.327303417Z" level=info msg="CreateContainer within sandbox \"2084121e9a1f8056618a5e6731bf8bb074b8a806300441b199ea465d548d1fac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"125de339e09231974f66778a3ec5c873b85307e18e32e6b7b86bd704d81da5ff\"" Nov 6 23:45:36.328382 containerd[1509]: time="2025-11-06T23:45:36.327770083Z" level=info msg="StartContainer for \"125de339e09231974f66778a3ec5c873b85307e18e32e6b7b86bd704d81da5ff\"" Nov 6 23:45:36.338914 kubelet[2318]: W1106 23:45:36.338869 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.225.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:36.339057 kubelet[2318]: E1106 23:45:36.339043 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.225.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:36.346610 systemd[1]: Started cri-containerd-125de339e09231974f66778a3ec5c873b85307e18e32e6b7b86bd704d81da5ff.scope - libcontainer container 125de339e09231974f66778a3ec5c873b85307e18e32e6b7b86bd704d81da5ff. 
Nov 6 23:45:36.348340 containerd[1509]: time="2025-11-06T23:45:36.348319854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-4-n-9b34d37d4f,Uid:26f3957d213d4e6c6136649e242c462c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb89907ccb8cd0d48a28187becbb529aa099741488440cf68f65d4f2d973e79\"" Nov 6 23:45:36.348640 containerd[1509]: time="2025-11-06T23:45:36.348628493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-4-n-9b34d37d4f,Uid:7b6e6fc7fa6e83ab88dd600b23f2098a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6784b1a5f8404f9f03230b6c801cd021cdfb4e0cf35a00b1b09f895f7ef398ac\"" Nov 6 23:45:36.350818 containerd[1509]: time="2025-11-06T23:45:36.350753214Z" level=info msg="CreateContainer within sandbox \"6784b1a5f8404f9f03230b6c801cd021cdfb4e0cf35a00b1b09f895f7ef398ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:45:36.351097 containerd[1509]: time="2025-11-06T23:45:36.351081053Z" level=info msg="CreateContainer within sandbox \"2bb89907ccb8cd0d48a28187becbb529aa099741488440cf68f65d4f2d973e79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:45:36.371993 containerd[1509]: time="2025-11-06T23:45:36.371945624Z" level=info msg="CreateContainer within sandbox \"2bb89907ccb8cd0d48a28187becbb529aa099741488440cf68f65d4f2d973e79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45\"" Nov 6 23:45:36.373532 containerd[1509]: time="2025-11-06T23:45:36.372432350Z" level=info msg="StartContainer for \"fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45\"" Nov 6 23:45:36.373605 kubelet[2318]: W1106 23:45:36.373447 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.62.225.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:36.373605 kubelet[2318]: E1106 23:45:36.373512 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.225.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:36.379956 containerd[1509]: time="2025-11-06T23:45:36.379902223Z" level=info msg="CreateContainer within sandbox \"6784b1a5f8404f9f03230b6c801cd021cdfb4e0cf35a00b1b09f895f7ef398ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d\"" Nov 6 23:45:36.381040 containerd[1509]: time="2025-11-06T23:45:36.380365413Z" level=info msg="StartContainer for \"cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d\"" Nov 6 23:45:36.402755 containerd[1509]: time="2025-11-06T23:45:36.402728301Z" level=info msg="StartContainer for \"125de339e09231974f66778a3ec5c873b85307e18e32e6b7b86bd704d81da5ff\" returns successfully" Nov 6 23:45:36.414647 systemd[1]: Started cri-containerd-fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45.scope - libcontainer container fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45. 
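All three static pods go through the same CRI sequence that containerd and systemd log here: RunPodSandbox returns a sandbox id, CreateContainer inside that sandbox returns a container id, systemd starts the matching cri-containerd-<id>.scope, and StartContainer then returns successfully. A deliberately rough Go sketch of that ordering with hypothetical stub functions; it is not the real CRI client, only the call order visible above:

    package main

    import "fmt"

    // Hypothetical stubs standing in for the CRI calls containerd logs above.
    func runPodSandbox(pod string) string             { return "sandbox-for-" + pod }
    func createContainer(sandbox, name string) string { return name + "-in-" + sandbox }
    func startContainer(id string)                    { fmt.Println("started", id) }

    func main() {
        for _, pod := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
            sandboxID := runPodSandbox(pod)                // RunPodSandbox ... returns sandbox id
            containerID := createContainer(sandboxID, pod) // CreateContainer within sandbox ... returns container id
            startContainer(containerID)                    // StartContainer ... returns successfully
        }
    }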
Nov 6 23:45:36.417236 systemd[1]: Started cri-containerd-cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d.scope - libcontainer container cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d. Nov 6 23:45:36.456636 containerd[1509]: time="2025-11-06T23:45:36.456603410Z" level=info msg="StartContainer for \"fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45\" returns successfully" Nov 6 23:45:36.461097 containerd[1509]: time="2025-11-06T23:45:36.461014571Z" level=info msg="StartContainer for \"cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d\" returns successfully" Nov 6 23:45:36.482768 kubelet[2318]: E1106 23:45:36.482568 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": dial tcp 46.62.225.38:6443: connect: connection refused" interval="1.6s" Nov 6 23:45:36.625473 kubelet[2318]: W1106 23:45:36.625378 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.225.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-9b34d37d4f&limit=500&resourceVersion=0": dial tcp 46.62.225.38:6443: connect: connection refused Nov 6 23:45:36.625473 kubelet[2318]: E1106 23:45:36.625436 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.225.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-n-9b34d37d4f&limit=500&resourceVersion=0\": dial tcp 46.62.225.38:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:45:36.647515 kubelet[2318]: I1106 23:45:36.647115 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:36.647515 kubelet[2318]: E1106 23:45:36.647365 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.225.38:6443/api/v1/nodes\": dial tcp 46.62.225.38:6443: connect: connection refused" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:37.130858 kubelet[2318]: E1106 23:45:37.130669 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:37.133284 kubelet[2318]: E1106 23:45:37.132861 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:37.135473 kubelet[2318]: E1106 23:45:37.135381 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.087023 kubelet[2318]: E1106 23:45:38.086990 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.138597 kubelet[2318]: E1106 23:45:38.138531 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.139306 kubelet[2318]: E1106 23:45:38.139259 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-n-9b34d37d4f\" not 
found" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.250459 kubelet[2318]: I1106 23:45:38.250403 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.268435 kubelet[2318]: I1106 23:45:38.268380 2318 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.268435 kubelet[2318]: E1106 23:45:38.268419 2318 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-4-n-9b34d37d4f\": node \"ci-4230-2-4-n-9b34d37d4f\" not found" Nov 6 23:45:38.284346 kubelet[2318]: E1106 23:45:38.280824 2318 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" Nov 6 23:45:38.381103 kubelet[2318]: E1106 23:45:38.380924 2318 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-4-n-9b34d37d4f\" not found" Nov 6 23:45:38.472015 kubelet[2318]: I1106 23:45:38.471945 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.479716 kubelet[2318]: E1106 23:45:38.479467 2318 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.479716 kubelet[2318]: I1106 23:45:38.479523 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.482946 kubelet[2318]: E1106 23:45:38.482411 2318 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-4-n-9b34d37d4f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.482946 kubelet[2318]: I1106 23:45:38.482527 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:38.485790 kubelet[2318]: E1106 23:45:38.485732 2318 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:39.055667 kubelet[2318]: I1106 23:45:39.055580 2318 apiserver.go:52] "Watching apiserver" Nov 6 23:45:39.076727 kubelet[2318]: I1106 23:45:39.076658 2318 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:45:39.137516 kubelet[2318]: I1106 23:45:39.137470 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:39.861629 kubelet[2318]: I1106 23:45:39.861567 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:40.385812 systemd[1]: Reload requested from client PID 2591 ('systemctl') (unit session-7.scope)... Nov 6 23:45:40.385832 systemd[1]: Reloading... Nov 6 23:45:40.490542 zram_generator::config[2633]: No configuration found. Nov 6 23:45:40.618190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:45:40.716261 systemd[1]: Reloading finished in 330 ms. 
Nov 6 23:45:40.739759 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:40.745860 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:45:40.746044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:45:40.746087 systemd[1]: kubelet.service: Consumed 996ms CPU time, 127.2M memory peak. Nov 6 23:45:40.752709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:45:40.852026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:45:40.862149 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:45:40.910993 kubelet[2687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:45:40.910993 kubelet[2687]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:45:40.910993 kubelet[2687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:45:40.914786 kubelet[2687]: I1106 23:45:40.914708 2687 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:45:40.926037 kubelet[2687]: I1106 23:45:40.925993 2687 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 23:45:40.926037 kubelet[2687]: I1106 23:45:40.926011 2687 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:45:40.926216 kubelet[2687]: I1106 23:45:40.926197 2687 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 23:45:40.927472 kubelet[2687]: I1106 23:45:40.927448 2687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 6 23:45:40.943215 kubelet[2687]: I1106 23:45:40.943133 2687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:45:40.957005 kubelet[2687]: E1106 23:45:40.956964 2687 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:45:40.957005 kubelet[2687]: I1106 23:45:40.956992 2687 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:45:40.959269 kubelet[2687]: I1106 23:45:40.959242 2687 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:45:40.960512 kubelet[2687]: I1106 23:45:40.960455 2687 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:45:40.960663 kubelet[2687]: I1106 23:45:40.960500 2687 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-4-n-9b34d37d4f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:45:40.960750 kubelet[2687]: I1106 23:45:40.960668 2687 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:45:40.960750 kubelet[2687]: I1106 23:45:40.960676 2687 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 23:45:40.960750 kubelet[2687]: I1106 23:45:40.960710 2687 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:45:40.961228 kubelet[2687]: I1106 23:45:40.961200 2687 kubelet.go:446] "Attempting to sync node with API server" Nov 6 23:45:40.961259 kubelet[2687]: I1106 23:45:40.961233 2687 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:45:40.961816 kubelet[2687]: I1106 23:45:40.961793 2687 kubelet.go:352] "Adding apiserver pod source" Nov 6 23:45:40.961816 kubelet[2687]: I1106 23:45:40.961810 2687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:45:40.965778 kubelet[2687]: I1106 23:45:40.965742 2687 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:45:40.966018 kubelet[2687]: I1106 23:45:40.965999 2687 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 23:45:40.970037 kubelet[2687]: I1106 23:45:40.969962 2687 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:45:40.970037 kubelet[2687]: I1106 23:45:40.970000 2687 server.go:1287] "Started kubelet" Nov 6 23:45:40.971123 kubelet[2687]: I1106 23:45:40.970142 2687 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:45:40.976060 kubelet[2687]: I1106 23:45:40.973433 2687 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:45:40.976060 kubelet[2687]: I1106 23:45:40.974857 2687 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:45:40.976060 kubelet[2687]: I1106 23:45:40.975798 2687 server.go:479] "Adding debug handlers to kubelet server" Nov 6 23:45:40.983548 kubelet[2687]: I1106 23:45:40.983478 2687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:45:40.987162 kubelet[2687]: I1106 23:45:40.987141 2687 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:45:40.990760 kubelet[2687]: I1106 23:45:40.990720 2687 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:45:40.991290 kubelet[2687]: I1106 23:45:40.991251 2687 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:45:40.991377 kubelet[2687]: I1106 23:45:40.991362 2687 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:45:40.994620 kubelet[2687]: I1106 23:45:40.994600 2687 factory.go:221] Registration of the systemd container factory successfully Nov 6 23:45:40.994689 kubelet[2687]: I1106 23:45:40.994680 2687 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:45:40.997028 kubelet[2687]: I1106 23:45:40.997016 2687 factory.go:221] Registration of the containerd container factory successfully Nov 6 23:45:40.999618 kubelet[2687]: I1106 23:45:40.999592 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 23:45:41.000248 kubelet[2687]: E1106 23:45:41.000237 2687 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:45:41.000386 kubelet[2687]: I1106 23:45:41.000308 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 6 23:45:41.000461 kubelet[2687]: I1106 23:45:41.000436 2687 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 23:45:41.000542 kubelet[2687]: I1106 23:45:41.000534 2687 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
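
The Container Manager NodeConfig printed above carries the kubelet's default hard-eviction thresholds (imagefs.available 15%, imagefs.inodesFree 5%, memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%). A minimal Python sketch of how those thresholds can be represented and checked; the values are transcribed from this log, but the evaluation logic is only an illustration, not the kubelet's actual eviction manager:

    # Hard-eviction thresholds as reported in the NodeConfig above.
    # Values are copied from the log; the checking logic is illustrative only.
    HARD_EVICTION = {
        "imagefs.available":  {"percentage": 0.15},
        "imagefs.inodesFree": {"percentage": 0.05},
        "memory.available":   {"quantity_bytes": 100 * 1024 * 1024},  # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "nodefs.inodesFree":  {"percentage": 0.05},
    }

    def crosses_threshold(signal: str, available: float, capacity: float) -> bool:
        """Return True if 'available' for a signal is below its hard threshold."""
        t = HARD_EVICTION[signal]
        if "quantity_bytes" in t:
            return available < t["quantity_bytes"]
        return capacity > 0 and (available / capacity) < t["percentage"]

    # Example: a node with 8 GiB RAM and only 64 MiB free would trip memory.available.
    print(crosses_threshold("memory.available", 64 * 1024**2, 8 * 1024**3))  # True
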
Nov 6 23:45:41.000580 kubelet[2687]: I1106 23:45:41.000576 2687 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 23:45:41.000650 kubelet[2687]: E1106 23:45:41.000636 2687 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:45:41.040272 kubelet[2687]: I1106 23:45:41.040249 2687 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:45:41.040426 kubelet[2687]: I1106 23:45:41.040418 2687 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:45:41.040471 kubelet[2687]: I1106 23:45:41.040467 2687 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:45:41.040681 kubelet[2687]: I1106 23:45:41.040661 2687 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 23:45:41.040754 kubelet[2687]: I1106 23:45:41.040736 2687 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 23:45:41.040788 kubelet[2687]: I1106 23:45:41.040784 2687 policy_none.go:49] "None policy: Start" Nov 6 23:45:41.040823 kubelet[2687]: I1106 23:45:41.040819 2687 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:45:41.040859 kubelet[2687]: I1106 23:45:41.040855 2687 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:45:41.040984 kubelet[2687]: I1106 23:45:41.040978 2687 state_mem.go:75] "Updated machine memory state" Nov 6 23:45:41.044014 kubelet[2687]: I1106 23:45:41.043988 2687 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 23:45:41.044145 kubelet[2687]: I1106 23:45:41.044129 2687 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:45:41.044187 kubelet[2687]: I1106 23:45:41.044141 2687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:45:41.044507 kubelet[2687]: I1106 23:45:41.044473 2687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:45:41.045757 kubelet[2687]: E1106 23:45:41.045738 2687 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 23:45:41.101938 kubelet[2687]: I1106 23:45:41.101883 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.103602 kubelet[2687]: I1106 23:45:41.103565 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.103937 kubelet[2687]: I1106 23:45:41.103898 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.112906 kubelet[2687]: E1106 23:45:41.112815 2687 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-4-n-9b34d37d4f\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.113724 kubelet[2687]: E1106 23:45:41.113664 2687 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.161719 kubelet[2687]: I1106 23:45:41.161673 2687 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.174061 kubelet[2687]: I1106 23:45:41.174019 2687 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.174225 kubelet[2687]: I1106 23:45:41.174138 2687 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.293968 kubelet[2687]: I1106 23:45:41.292671 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdb0654bad4581ed0be549f4d71437c6-k8s-certs\") pod \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" (UID: \"fdb0654bad4581ed0be549f4d71437c6\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.293968 kubelet[2687]: I1106 23:45:41.292737 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fdb0654bad4581ed0be549f4d71437c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" (UID: \"fdb0654bad4581ed0be549f4d71437c6\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.293968 kubelet[2687]: I1106 23:45:41.292764 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-ca-certs\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.293968 kubelet[2687]: I1106 23:45:41.292782 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.293968 kubelet[2687]: I1106 23:45:41.292812 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-kubeconfig\") pod 
\"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.294172 kubelet[2687]: I1106 23:45:41.292829 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b6e6fc7fa6e83ab88dd600b23f2098a-kubeconfig\") pod \"kube-scheduler-ci-4230-2-4-n-9b34d37d4f\" (UID: \"7b6e6fc7fa6e83ab88dd600b23f2098a\") " pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.294172 kubelet[2687]: I1106 23:45:41.292845 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdb0654bad4581ed0be549f4d71437c6-ca-certs\") pod \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" (UID: \"fdb0654bad4581ed0be549f4d71437c6\") " pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.294172 kubelet[2687]: I1106 23:45:41.292864 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.294172 kubelet[2687]: I1106 23:45:41.292884 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26f3957d213d4e6c6136649e242c462c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-4-n-9b34d37d4f\" (UID: \"26f3957d213d4e6c6136649e242c462c\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:41.395674 sudo[2719]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:45:41.395945 sudo[2719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 23:45:41.856394 sudo[2719]: pam_unix(sudo:session): session closed for user root Nov 6 23:45:41.965611 kubelet[2687]: I1106 23:45:41.965479 2687 apiserver.go:52] "Watching apiserver" Nov 6 23:45:41.993671 kubelet[2687]: I1106 23:45:41.993596 2687 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:45:42.030749 kubelet[2687]: I1106 23:45:42.030708 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:42.036892 kubelet[2687]: I1106 23:45:42.036834 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" podStartSLOduration=3.036794277 podStartE2EDuration="3.036794277s" podCreationTimestamp="2025-11-06 23:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:45:42.03472129 +0000 UTC m=+1.167604477" watchObservedRunningTime="2025-11-06 23:45:42.036794277 +0000 UTC m=+1.169677464" Nov 6 23:45:42.041175 kubelet[2687]: E1106 23:45:42.041138 2687 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-4-n-9b34d37d4f\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" Nov 6 23:45:42.060808 kubelet[2687]: I1106 23:45:42.060457 2687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-4-n-9b34d37d4f" podStartSLOduration=1.060438726 podStartE2EDuration="1.060438726s" podCreationTimestamp="2025-11-06 23:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:45:42.059922931 +0000 UTC m=+1.192806118" watchObservedRunningTime="2025-11-06 23:45:42.060438726 +0000 UTC m=+1.193321913" Nov 6 23:45:42.060974 kubelet[2687]: I1106 23:45:42.060866 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-4-n-9b34d37d4f" podStartSLOduration=3.060854815 podStartE2EDuration="3.060854815s" podCreationTimestamp="2025-11-06 23:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:45:42.049166307 +0000 UTC m=+1.182049504" watchObservedRunningTime="2025-11-06 23:45:42.060854815 +0000 UTC m=+1.193738002" Nov 6 23:45:43.993787 sudo[1783]: pam_unix(sudo:session): session closed for user root Nov 6 23:45:44.157741 sshd[1782]: Connection closed by 139.178.89.65 port 52400 Nov 6 23:45:44.159297 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Nov 6 23:45:44.163260 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:45:44.163876 systemd[1]: sshd@6-46.62.225.38:22-139.178.89.65:52400.service: Deactivated successfully. Nov 6 23:45:44.166250 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 23:45:44.166436 systemd[1]: session-7.scope: Consumed 4.222s CPU time, 208.8M memory peak. Nov 6 23:45:44.168150 systemd-logind[1482]: Removed session 7. Nov 6 23:45:45.798989 kubelet[2687]: I1106 23:45:45.798616 2687 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:45:45.799347 containerd[1509]: time="2025-11-06T23:45:45.798899717Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:45:45.799854 kubelet[2687]: I1106 23:45:45.799814 2687 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:45:46.768453 systemd[1]: Created slice kubepods-besteffort-pod40f457cb_ef7c_4fb0_a70b_e288bf6ab0df.slice - libcontainer container kubepods-besteffort-pod40f457cb_ef7c_4fb0_a70b_e288bf6ab0df.slice. Nov 6 23:45:46.795630 systemd[1]: Created slice kubepods-burstable-podf80a5697_3751_4fa4_a8ec_a8ba8197449c.slice - libcontainer container kubepods-burstable-podf80a5697_3751_4fa4_a8ec_a8ba8197449c.slice. 
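
The pod_startup_latency_tracker entries above report podStartSLOduration figures such as 3.036794277s for kube-apiserver. A small sketch that reproduces that number from the two timestamps printed in the same entry, under the assumption that the SLO duration is simply watchObservedRunningTime minus podCreationTimestamp (the arithmetic below only reproduces the reported figure):

    from datetime import datetime, timezone

    # Timestamps copied from the kube-apiserver entry above (+0000 UTC).
    created  = "2025-11-06 23:45:39"            # podCreationTimestamp
    observed = "2025-11-06 23:45:42.036794277"  # watchObservedRunningTime

    def parse_utc(ts: str) -> datetime:
        """Parse 'YYYY-MM-DD HH:MM:SS[.frac]' as UTC."""
        if "." in ts:
            head, frac = ts.split(".")
            ts = f"{head}.{frac[:6]}"  # %f takes at most 6 digits, so trim nanoseconds
            fmt = "%Y-%m-%d %H:%M:%S.%f"
        else:
            fmt = "%Y-%m-%d %H:%M:%S"
        return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)

    delta = parse_utc(observed) - parse_utc(created)
    print(delta.total_seconds())  # ~3.036794 (the log reports 3.036794277)
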
Nov 6 23:45:46.834557 kubelet[2687]: I1106 23:45:46.834515 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hostproc\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834557 kubelet[2687]: I1106 23:45:46.834549 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cni-path\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834557 kubelet[2687]: I1106 23:45:46.834562 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-xtables-lock\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834896 kubelet[2687]: I1106 23:45:46.834574 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40f457cb-ef7c-4fb0-a70b-e288bf6ab0df-xtables-lock\") pod \"kube-proxy-jrjsp\" (UID: \"40f457cb-ef7c-4fb0-a70b-e288bf6ab0df\") " pod="kube-system/kube-proxy-jrjsp" Nov 6 23:45:46.834896 kubelet[2687]: I1106 23:45:46.834584 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40f457cb-ef7c-4fb0-a70b-e288bf6ab0df-lib-modules\") pod \"kube-proxy-jrjsp\" (UID: \"40f457cb-ef7c-4fb0-a70b-e288bf6ab0df\") " pod="kube-system/kube-proxy-jrjsp" Nov 6 23:45:46.834896 kubelet[2687]: I1106 23:45:46.834595 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-run\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834896 kubelet[2687]: I1106 23:45:46.834605 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f80a5697-3751-4fa4-a8ec-a8ba8197449c-clustermesh-secrets\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834896 kubelet[2687]: I1106 23:45:46.834618 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-kernel\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834973 kubelet[2687]: I1106 23:45:46.834636 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcqh\" (UniqueName: \"kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-kube-api-access-jqcqh\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834973 kubelet[2687]: I1106 23:45:46.834657 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-bpf-maps\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834973 kubelet[2687]: I1106 23:45:46.834676 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-lib-modules\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834973 kubelet[2687]: I1106 23:45:46.834695 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40f457cb-ef7c-4fb0-a70b-e288bf6ab0df-kube-proxy\") pod \"kube-proxy-jrjsp\" (UID: \"40f457cb-ef7c-4fb0-a70b-e288bf6ab0df\") " pod="kube-system/kube-proxy-jrjsp" Nov 6 23:45:46.834973 kubelet[2687]: I1106 23:45:46.834705 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-etc-cni-netd\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.834973 kubelet[2687]: I1106 23:45:46.834718 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-net\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.835091 kubelet[2687]: I1106 23:45:46.834727 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hubble-tls\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.835091 kubelet[2687]: I1106 23:45:46.834737 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdct2\" (UniqueName: \"kubernetes.io/projected/40f457cb-ef7c-4fb0-a70b-e288bf6ab0df-kube-api-access-qdct2\") pod \"kube-proxy-jrjsp\" (UID: \"40f457cb-ef7c-4fb0-a70b-e288bf6ab0df\") " pod="kube-system/kube-proxy-jrjsp" Nov 6 23:45:46.835091 kubelet[2687]: I1106 23:45:46.834751 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-cgroup\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.835091 kubelet[2687]: I1106 23:45:46.834762 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-config-path\") pod \"cilium-f9wx5\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " pod="kube-system/cilium-f9wx5" Nov 6 23:45:46.864629 systemd[1]: Created slice kubepods-besteffort-pod9e20300e_e311_4a12_a87b_f7e645683436.slice - libcontainer container kubepods-besteffort-pod9e20300e_e311_4a12_a87b_f7e645683436.slice. 
Nov 6 23:45:46.935162 kubelet[2687]: I1106 23:45:46.935120 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvqr\" (UniqueName: \"kubernetes.io/projected/9e20300e-e311-4a12-a87b-f7e645683436-kube-api-access-vfvqr\") pod \"cilium-operator-6c4d7847fc-bhgbd\" (UID: \"9e20300e-e311-4a12-a87b-f7e645683436\") " pod="kube-system/cilium-operator-6c4d7847fc-bhgbd" Nov 6 23:45:46.935321 kubelet[2687]: I1106 23:45:46.935204 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e20300e-e311-4a12-a87b-f7e645683436-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bhgbd\" (UID: \"9e20300e-e311-4a12-a87b-f7e645683436\") " pod="kube-system/cilium-operator-6c4d7847fc-bhgbd" Nov 6 23:45:47.076883 containerd[1509]: time="2025-11-06T23:45:47.076830233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jrjsp,Uid:40f457cb-ef7c-4fb0-a70b-e288bf6ab0df,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:47.104543 containerd[1509]: time="2025-11-06T23:45:47.102551710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f9wx5,Uid:f80a5697-3751-4fa4-a8ec-a8ba8197449c,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:47.109601 containerd[1509]: time="2025-11-06T23:45:47.107810514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:45:47.109601 containerd[1509]: time="2025-11-06T23:45:47.107875854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:45:47.109601 containerd[1509]: time="2025-11-06T23:45:47.107898901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:47.109601 containerd[1509]: time="2025-11-06T23:45:47.107989420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:47.141863 systemd[1]: Started cri-containerd-9d7db983e0425dcfbcf193239f729c0b1e65c890afb4da321e0c3d469fe5b512.scope - libcontainer container 9d7db983e0425dcfbcf193239f729c0b1e65c890afb4da321e0c3d469fe5b512. Nov 6 23:45:47.164707 containerd[1509]: time="2025-11-06T23:45:47.164310817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:45:47.164707 containerd[1509]: time="2025-11-06T23:45:47.164409282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:45:47.164707 containerd[1509]: time="2025-11-06T23:45:47.164430028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:47.166052 containerd[1509]: time="2025-11-06T23:45:47.165977572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:47.173720 containerd[1509]: time="2025-11-06T23:45:47.173677402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bhgbd,Uid:9e20300e-e311-4a12-a87b-f7e645683436,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:47.190121 containerd[1509]: time="2025-11-06T23:45:47.190070854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jrjsp,Uid:40f457cb-ef7c-4fb0-a70b-e288bf6ab0df,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d7db983e0425dcfbcf193239f729c0b1e65c890afb4da321e0c3d469fe5b512\"" Nov 6 23:45:47.193851 systemd[1]: Started cri-containerd-2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05.scope - libcontainer container 2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05. Nov 6 23:45:47.198007 containerd[1509]: time="2025-11-06T23:45:47.197870169Z" level=info msg="CreateContainer within sandbox \"9d7db983e0425dcfbcf193239f729c0b1e65c890afb4da321e0c3d469fe5b512\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:45:47.225534 containerd[1509]: time="2025-11-06T23:45:47.225059409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:45:47.225534 containerd[1509]: time="2025-11-06T23:45:47.225091604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:45:47.225534 containerd[1509]: time="2025-11-06T23:45:47.225098669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:47.225534 containerd[1509]: time="2025-11-06T23:45:47.225153833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:45:47.239152 containerd[1509]: time="2025-11-06T23:45:47.238503164Z" level=info msg="CreateContainer within sandbox \"9d7db983e0425dcfbcf193239f729c0b1e65c890afb4da321e0c3d469fe5b512\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f9d9ab4021c890244f6bfa06bd070deb21228a63ecf034fffc98e4ba3fc8d12\"" Nov 6 23:45:47.239737 containerd[1509]: time="2025-11-06T23:45:47.239714041Z" level=info msg="StartContainer for \"3f9d9ab4021c890244f6bfa06bd070deb21228a63ecf034fffc98e4ba3fc8d12\"" Nov 6 23:45:47.243638 systemd[1]: Started cri-containerd-5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe.scope - libcontainer container 5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe. Nov 6 23:45:47.245318 containerd[1509]: time="2025-11-06T23:45:47.245012203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f9wx5,Uid:f80a5697-3751-4fa4-a8ec-a8ba8197449c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\"" Nov 6 23:45:47.247778 containerd[1509]: time="2025-11-06T23:45:47.247596772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 23:45:47.273640 systemd[1]: Started cri-containerd-3f9d9ab4021c890244f6bfa06bd070deb21228a63ecf034fffc98e4ba3fc8d12.scope - libcontainer container 3f9d9ab4021c890244f6bfa06bd070deb21228a63ecf034fffc98e4ba3fc8d12. 
Nov 6 23:45:47.299175 containerd[1509]: time="2025-11-06T23:45:47.299100252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bhgbd,Uid:9e20300e-e311-4a12-a87b-f7e645683436,Namespace:kube-system,Attempt:0,} returns sandbox id \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\"" Nov 6 23:45:47.305045 containerd[1509]: time="2025-11-06T23:45:47.304564282Z" level=info msg="StartContainer for \"3f9d9ab4021c890244f6bfa06bd070deb21228a63ecf034fffc98e4ba3fc8d12\" returns successfully" Nov 6 23:45:48.060826 kubelet[2687]: I1106 23:45:48.060713 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jrjsp" podStartSLOduration=2.060686616 podStartE2EDuration="2.060686616s" podCreationTimestamp="2025-11-06 23:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:45:48.05998253 +0000 UTC m=+7.192865797" watchObservedRunningTime="2025-11-06 23:45:48.060686616 +0000 UTC m=+7.193569823" Nov 6 23:45:52.784742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478248819.mount: Deactivated successfully. Nov 6 23:45:54.011115 containerd[1509]: time="2025-11-06T23:45:54.011066030Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:54.012595 containerd[1509]: time="2025-11-06T23:45:54.012563270Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 23:45:54.014280 containerd[1509]: time="2025-11-06T23:45:54.013299489Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:54.014280 containerd[1509]: time="2025-11-06T23:45:54.014190113Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.765541349s" Nov 6 23:45:54.014280 containerd[1509]: time="2025-11-06T23:45:54.014210443Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 23:45:54.018218 containerd[1509]: time="2025-11-06T23:45:54.018198617Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 23:45:54.020005 containerd[1509]: time="2025-11-06T23:45:54.019979225Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:45:54.069799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902399463.mount: Deactivated successfully. 
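
The image pull above reports 166,730,503 bytes read for the cilium image and a pull time of 6.765541349s. A back-of-the-envelope throughput check, assuming the reported duration covers the whole transfer and ignoring layer unpacking and any cached layers:

    # Figures copied from the "stop pulling image" / "Pulled image" entries above.
    bytes_read   = 166_730_503    # "active requests=0, bytes read=166730503"
    pull_seconds = 6.765541349    # "... in 6.765541349s"

    mib_per_s  = bytes_read / pull_seconds / (1024 * 1024)
    mbit_per_s = bytes_read * 8 / pull_seconds / 1_000_000
    print(f"~{mib_per_s:.1f} MiB/s (~{mbit_per_s:.0f} Mbit/s) effective pull rate")
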
Nov 6 23:45:54.093148 containerd[1509]: time="2025-11-06T23:45:54.093057863Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\"" Nov 6 23:45:54.093953 containerd[1509]: time="2025-11-06T23:45:54.093756572Z" level=info msg="StartContainer for \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\"" Nov 6 23:45:54.192634 systemd[1]: Started cri-containerd-1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7.scope - libcontainer container 1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7. Nov 6 23:45:54.219789 containerd[1509]: time="2025-11-06T23:45:54.219740642Z" level=info msg="StartContainer for \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\" returns successfully" Nov 6 23:45:54.235714 systemd[1]: cri-containerd-1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7.scope: Deactivated successfully. Nov 6 23:45:54.236159 systemd[1]: cri-containerd-1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7.scope: Consumed 21ms CPU time, 5M memory peak, 4K read from disk, 3.2M written to disk. Nov 6 23:45:54.318090 containerd[1509]: time="2025-11-06T23:45:54.301751223Z" level=info msg="shim disconnected" id=1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7 namespace=k8s.io Nov 6 23:45:54.318379 containerd[1509]: time="2025-11-06T23:45:54.318092741Z" level=warning msg="cleaning up after shim disconnected" id=1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7 namespace=k8s.io Nov 6 23:45:54.318379 containerd[1509]: time="2025-11-06T23:45:54.318118843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:45:55.055180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7-rootfs.mount: Deactivated successfully. Nov 6 23:45:55.074200 containerd[1509]: time="2025-11-06T23:45:55.074137308Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:45:55.101707 containerd[1509]: time="2025-11-06T23:45:55.100896911Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\"" Nov 6 23:45:55.102862 containerd[1509]: time="2025-11-06T23:45:55.102829994Z" level=info msg="StartContainer for \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\"" Nov 6 23:45:55.174830 systemd[1]: Started cri-containerd-024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1.scope - libcontainer container 024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1. Nov 6 23:45:55.217387 containerd[1509]: time="2025-11-06T23:45:55.216582925Z" level=info msg="StartContainer for \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\" returns successfully" Nov 6 23:45:55.235408 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:45:55.236090 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:45:55.237247 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Nov 6 23:45:55.243736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:45:55.243937 systemd[1]: cri-containerd-024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1.scope: Deactivated successfully. Nov 6 23:45:55.284018 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:45:55.286900 containerd[1509]: time="2025-11-06T23:45:55.286550050Z" level=info msg="shim disconnected" id=024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1 namespace=k8s.io Nov 6 23:45:55.286900 containerd[1509]: time="2025-11-06T23:45:55.286662771Z" level=warning msg="cleaning up after shim disconnected" id=024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1 namespace=k8s.io Nov 6 23:45:55.286900 containerd[1509]: time="2025-11-06T23:45:55.286676558Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:45:55.302571 containerd[1509]: time="2025-11-06T23:45:55.302477650Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:45:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:45:56.054403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1-rootfs.mount: Deactivated successfully. Nov 6 23:45:56.076876 containerd[1509]: time="2025-11-06T23:45:56.076732976Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:45:56.106333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940305523.mount: Deactivated successfully. Nov 6 23:45:56.128141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003205649.mount: Deactivated successfully. Nov 6 23:45:56.142101 containerd[1509]: time="2025-11-06T23:45:56.142039509Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\"" Nov 6 23:45:56.144787 containerd[1509]: time="2025-11-06T23:45:56.144723080Z" level=info msg="StartContainer for \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\"" Nov 6 23:45:56.179702 systemd[1]: Started cri-containerd-e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63.scope - libcontainer container e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63. Nov 6 23:45:56.213844 containerd[1509]: time="2025-11-06T23:45:56.213789586Z" level=info msg="StartContainer for \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\" returns successfully" Nov 6 23:45:56.219715 systemd[1]: cri-containerd-e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63.scope: Deactivated successfully. 
Nov 6 23:45:56.256952 containerd[1509]: time="2025-11-06T23:45:56.256872104Z" level=info msg="shim disconnected" id=e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63 namespace=k8s.io Nov 6 23:45:56.256952 containerd[1509]: time="2025-11-06T23:45:56.256929838Z" level=warning msg="cleaning up after shim disconnected" id=e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63 namespace=k8s.io Nov 6 23:45:56.256952 containerd[1509]: time="2025-11-06T23:45:56.256937221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:45:56.790429 containerd[1509]: time="2025-11-06T23:45:56.790373442Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:56.792472 containerd[1509]: time="2025-11-06T23:45:56.792277249Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 23:45:56.794292 containerd[1509]: time="2025-11-06T23:45:56.793402181Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:45:56.794292 containerd[1509]: time="2025-11-06T23:45:56.794187127Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.77596551s" Nov 6 23:45:56.794292 containerd[1509]: time="2025-11-06T23:45:56.794206626Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 23:45:56.797206 containerd[1509]: time="2025-11-06T23:45:56.797174238Z" level=info msg="CreateContainer within sandbox \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 23:45:56.813259 containerd[1509]: time="2025-11-06T23:45:56.813208941Z" level=info msg="CreateContainer within sandbox \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\"" Nov 6 23:45:56.814360 containerd[1509]: time="2025-11-06T23:45:56.813806388Z" level=info msg="StartContainer for \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\"" Nov 6 23:45:56.837637 systemd[1]: Started cri-containerd-9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40.scope - libcontainer container 9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40. 
Nov 6 23:45:56.861954 containerd[1509]: time="2025-11-06T23:45:56.861925814Z" level=info msg="StartContainer for \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\" returns successfully" Nov 6 23:45:57.084549 containerd[1509]: time="2025-11-06T23:45:57.084511011Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:45:57.098402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081405095.mount: Deactivated successfully. Nov 6 23:45:57.098761 containerd[1509]: time="2025-11-06T23:45:57.098741752Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\"" Nov 6 23:45:57.099304 containerd[1509]: time="2025-11-06T23:45:57.099294093Z" level=info msg="StartContainer for \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\"" Nov 6 23:45:57.118959 kubelet[2687]: I1106 23:45:57.118902 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bhgbd" podStartSLOduration=1.6240447900000001 podStartE2EDuration="11.11864053s" podCreationTimestamp="2025-11-06 23:45:46 +0000 UTC" firstStartedPulling="2025-11-06 23:45:47.300308315 +0000 UTC m=+6.433191472" lastFinishedPulling="2025-11-06 23:45:56.794904045 +0000 UTC m=+15.927787212" observedRunningTime="2025-11-06 23:45:57.086600161 +0000 UTC m=+16.219483318" watchObservedRunningTime="2025-11-06 23:45:57.11864053 +0000 UTC m=+16.251523687" Nov 6 23:45:57.139621 systemd[1]: Started cri-containerd-f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9.scope - libcontainer container f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9. Nov 6 23:45:57.171608 systemd[1]: cri-containerd-f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9.scope: Deactivated successfully. Nov 6 23:45:57.173432 containerd[1509]: time="2025-11-06T23:45:57.173354942Z" level=info msg="StartContainer for \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\" returns successfully" Nov 6 23:45:57.232847 containerd[1509]: time="2025-11-06T23:45:57.232681709Z" level=info msg="shim disconnected" id=f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9 namespace=k8s.io Nov 6 23:45:57.232847 containerd[1509]: time="2025-11-06T23:45:57.232840833Z" level=warning msg="cleaning up after shim disconnected" id=f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9 namespace=k8s.io Nov 6 23:45:57.232847 containerd[1509]: time="2025-11-06T23:45:57.232846865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:45:57.251554 containerd[1509]: time="2025-11-06T23:45:57.251477855Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:45:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:45:58.056208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9-rootfs.mount: Deactivated successfully. 
Nov 6 23:45:58.092600 containerd[1509]: time="2025-11-06T23:45:58.092297410Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:45:58.119199 containerd[1509]: time="2025-11-06T23:45:58.119137937Z" level=info msg="CreateContainer within sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\"" Nov 6 23:45:58.125547 containerd[1509]: time="2025-11-06T23:45:58.121612599Z" level=info msg="StartContainer for \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\"" Nov 6 23:45:58.181333 systemd[1]: run-containerd-runc-k8s.io-46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11-runc.8tAqIg.mount: Deactivated successfully. Nov 6 23:45:58.193746 systemd[1]: Started cri-containerd-46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11.scope - libcontainer container 46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11. Nov 6 23:45:58.236376 containerd[1509]: time="2025-11-06T23:45:58.236306740Z" level=info msg="StartContainer for \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\" returns successfully" Nov 6 23:45:58.363624 kubelet[2687]: I1106 23:45:58.362999 2687 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 23:45:58.402922 systemd[1]: Created slice kubepods-burstable-pod77f9bd09_def1_4f27_8ca2_638a0b2f3671.slice - libcontainer container kubepods-burstable-pod77f9bd09_def1_4f27_8ca2_638a0b2f3671.slice. Nov 6 23:45:58.410951 systemd[1]: Created slice kubepods-burstable-pod7d2814d8_61c7_435c_bd36_874dc87081e8.slice - libcontainer container kubepods-burstable-pod7d2814d8_61c7_435c_bd36_874dc87081e8.slice. 
Nov 6 23:45:58.439101 kubelet[2687]: I1106 23:45:58.438964 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77f9bd09-def1-4f27-8ca2-638a0b2f3671-config-volume\") pod \"coredns-668d6bf9bc-9lx7k\" (UID: \"77f9bd09-def1-4f27-8ca2-638a0b2f3671\") " pod="kube-system/coredns-668d6bf9bc-9lx7k" Nov 6 23:45:58.439101 kubelet[2687]: I1106 23:45:58.439006 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9j4\" (UniqueName: \"kubernetes.io/projected/77f9bd09-def1-4f27-8ca2-638a0b2f3671-kube-api-access-pc9j4\") pod \"coredns-668d6bf9bc-9lx7k\" (UID: \"77f9bd09-def1-4f27-8ca2-638a0b2f3671\") " pod="kube-system/coredns-668d6bf9bc-9lx7k" Nov 6 23:45:58.439101 kubelet[2687]: I1106 23:45:58.439023 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d2814d8-61c7-435c-bd36-874dc87081e8-config-volume\") pod \"coredns-668d6bf9bc-hdkcx\" (UID: \"7d2814d8-61c7-435c-bd36-874dc87081e8\") " pod="kube-system/coredns-668d6bf9bc-hdkcx" Nov 6 23:45:58.439101 kubelet[2687]: I1106 23:45:58.439042 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvqb\" (UniqueName: \"kubernetes.io/projected/7d2814d8-61c7-435c-bd36-874dc87081e8-kube-api-access-vkvqb\") pod \"coredns-668d6bf9bc-hdkcx\" (UID: \"7d2814d8-61c7-435c-bd36-874dc87081e8\") " pod="kube-system/coredns-668d6bf9bc-hdkcx" Nov 6 23:45:58.709504 containerd[1509]: time="2025-11-06T23:45:58.708941546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9lx7k,Uid:77f9bd09-def1-4f27-8ca2-638a0b2f3671,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:58.715786 containerd[1509]: time="2025-11-06T23:45:58.715616831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hdkcx,Uid:7d2814d8-61c7-435c-bd36-874dc87081e8,Namespace:kube-system,Attempt:0,}" Nov 6 23:45:59.153577 kubelet[2687]: I1106 23:45:59.153516 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f9wx5" podStartSLOduration=6.381822048 podStartE2EDuration="13.153468809s" podCreationTimestamp="2025-11-06 23:45:46 +0000 UTC" firstStartedPulling="2025-11-06 23:45:47.246385634 +0000 UTC m=+6.379268792" lastFinishedPulling="2025-11-06 23:45:54.018032386 +0000 UTC m=+13.150915553" observedRunningTime="2025-11-06 23:45:59.152773524 +0000 UTC m=+18.285656721" watchObservedRunningTime="2025-11-06 23:45:59.153468809 +0000 UTC m=+18.286351996" Nov 6 23:46:00.394963 systemd-networkd[1411]: cilium_host: Link UP Nov 6 23:46:00.399820 systemd-networkd[1411]: cilium_net: Link UP Nov 6 23:46:00.402254 systemd-networkd[1411]: cilium_net: Gained carrier Nov 6 23:46:00.404465 systemd-networkd[1411]: cilium_host: Gained carrier Nov 6 23:46:00.532253 systemd-networkd[1411]: cilium_host: Gained IPv6LL Nov 6 23:46:00.533599 systemd-networkd[1411]: cilium_vxlan: Link UP Nov 6 23:46:00.533606 systemd-networkd[1411]: cilium_vxlan: Gained carrier Nov 6 23:46:00.930531 kernel: NET: Registered PF_ALG protocol family Nov 6 23:46:00.989040 systemd-networkd[1411]: cilium_net: Gained IPv6LL Nov 6 23:46:01.446779 systemd-networkd[1411]: lxc_health: Link UP Nov 6 23:46:01.446984 systemd-networkd[1411]: lxc_health: Gained carrier Nov 6 23:46:01.691705 systemd-networkd[1411]: cilium_vxlan: Gained IPv6LL Nov 6 
23:46:01.791772 systemd-networkd[1411]: lxc90473c7ff1bd: Link UP Nov 6 23:46:01.799519 kernel: eth0: renamed from tmp7f16c Nov 6 23:46:01.818779 systemd-networkd[1411]: lxc9933f889ca37: Link UP Nov 6 23:46:01.821421 kernel: eth0: renamed from tmp98017 Nov 6 23:46:01.822316 systemd-networkd[1411]: lxc90473c7ff1bd: Gained carrier Nov 6 23:46:01.828593 systemd-networkd[1411]: lxc9933f889ca37: Gained carrier Nov 6 23:46:03.419799 systemd-networkd[1411]: lxc90473c7ff1bd: Gained IPv6LL Nov 6 23:46:03.421895 systemd-networkd[1411]: lxc9933f889ca37: Gained IPv6LL Nov 6 23:46:03.484118 systemd-networkd[1411]: lxc_health: Gained IPv6LL Nov 6 23:46:04.480359 containerd[1509]: time="2025-11-06T23:46:04.480198305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:46:04.480359 containerd[1509]: time="2025-11-06T23:46:04.480248248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:46:04.480359 containerd[1509]: time="2025-11-06T23:46:04.480257830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:46:04.480359 containerd[1509]: time="2025-11-06T23:46:04.480307593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:46:04.520615 systemd[1]: Started cri-containerd-7f16c3ad0c12c6d9fee4eac3b19a7ab05014a172731a2b4a7291c3d36072fc13.scope - libcontainer container 7f16c3ad0c12c6d9fee4eac3b19a7ab05014a172731a2b4a7291c3d36072fc13. Nov 6 23:46:04.546078 containerd[1509]: time="2025-11-06T23:46:04.545742937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:46:04.546232 containerd[1509]: time="2025-11-06T23:46:04.546197823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:46:04.546552 containerd[1509]: time="2025-11-06T23:46:04.546267722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:46:04.546552 containerd[1509]: time="2025-11-06T23:46:04.546391943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:46:04.566607 systemd[1]: Started cri-containerd-9801783b0f22a024867112a0a3e5dfab10157793fa525bd99bb434e1c005f282.scope - libcontainer container 9801783b0f22a024867112a0a3e5dfab10157793fa525bd99bb434e1c005f282. 
Nov 6 23:46:04.591639 containerd[1509]: time="2025-11-06T23:46:04.591600221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9lx7k,Uid:77f9bd09-def1-4f27-8ca2-638a0b2f3671,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f16c3ad0c12c6d9fee4eac3b19a7ab05014a172731a2b4a7291c3d36072fc13\"" Nov 6 23:46:04.594208 containerd[1509]: time="2025-11-06T23:46:04.594184294Z" level=info msg="CreateContainer within sandbox \"7f16c3ad0c12c6d9fee4eac3b19a7ab05014a172731a2b4a7291c3d36072fc13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:46:04.618687 containerd[1509]: time="2025-11-06T23:46:04.618648935Z" level=info msg="CreateContainer within sandbox \"7f16c3ad0c12c6d9fee4eac3b19a7ab05014a172731a2b4a7291c3d36072fc13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e960f56c8609bc540ef7a0478454343fd9598b2d52a0dec42f35507eb59d0ed6\"" Nov 6 23:46:04.620988 containerd[1509]: time="2025-11-06T23:46:04.620411297Z" level=info msg="StartContainer for \"e960f56c8609bc540ef7a0478454343fd9598b2d52a0dec42f35507eb59d0ed6\"" Nov 6 23:46:04.624234 containerd[1509]: time="2025-11-06T23:46:04.623917036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hdkcx,Uid:7d2814d8-61c7-435c-bd36-874dc87081e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9801783b0f22a024867112a0a3e5dfab10157793fa525bd99bb434e1c005f282\"" Nov 6 23:46:04.626979 containerd[1509]: time="2025-11-06T23:46:04.626953154Z" level=info msg="CreateContainer within sandbox \"9801783b0f22a024867112a0a3e5dfab10157793fa525bd99bb434e1c005f282\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:46:04.645709 systemd[1]: Started cri-containerd-e960f56c8609bc540ef7a0478454343fd9598b2d52a0dec42f35507eb59d0ed6.scope - libcontainer container e960f56c8609bc540ef7a0478454343fd9598b2d52a0dec42f35507eb59d0ed6. Nov 6 23:46:04.647606 containerd[1509]: time="2025-11-06T23:46:04.647166646Z" level=info msg="CreateContainer within sandbox \"9801783b0f22a024867112a0a3e5dfab10157793fa525bd99bb434e1c005f282\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"988e975cf41e23e77e7c34f073a5c3b331ea64c9b8db64d33b35cc21d1971df0\"" Nov 6 23:46:04.648803 containerd[1509]: time="2025-11-06T23:46:04.648721324Z" level=info msg="StartContainer for \"988e975cf41e23e77e7c34f073a5c3b331ea64c9b8db64d33b35cc21d1971df0\"" Nov 6 23:46:04.674625 systemd[1]: Started cri-containerd-988e975cf41e23e77e7c34f073a5c3b331ea64c9b8db64d33b35cc21d1971df0.scope - libcontainer container 988e975cf41e23e77e7c34f073a5c3b331ea64c9b8db64d33b35cc21d1971df0. 
Nov 6 23:46:04.682033 containerd[1509]: time="2025-11-06T23:46:04.681985320Z" level=info msg="StartContainer for \"e960f56c8609bc540ef7a0478454343fd9598b2d52a0dec42f35507eb59d0ed6\" returns successfully" Nov 6 23:46:04.701533 containerd[1509]: time="2025-11-06T23:46:04.701232174Z" level=info msg="StartContainer for \"988e975cf41e23e77e7c34f073a5c3b331ea64c9b8db64d33b35cc21d1971df0\" returns successfully" Nov 6 23:46:05.179185 kubelet[2687]: I1106 23:46:05.179094 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hdkcx" podStartSLOduration=19.17906736 podStartE2EDuration="19.17906736s" podCreationTimestamp="2025-11-06 23:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:46:05.177823952 +0000 UTC m=+24.310707109" watchObservedRunningTime="2025-11-06 23:46:05.17906736 +0000 UTC m=+24.311950557" Nov 6 23:46:05.238747 kubelet[2687]: I1106 23:46:05.237203 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9lx7k" podStartSLOduration=19.237183061 podStartE2EDuration="19.237183061s" podCreationTimestamp="2025-11-06 23:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:46:05.203826452 +0000 UTC m=+24.336709649" watchObservedRunningTime="2025-11-06 23:46:05.237183061 +0000 UTC m=+24.370066248" Nov 6 23:46:05.485891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347045457.mount: Deactivated successfully. Nov 6 23:48:06.820834 systemd[1]: Started sshd@7-46.62.225.38:22-139.178.89.65:60876.service - OpenSSH per-connection server daemon (139.178.89.65:60876). Nov 6 23:48:07.861863 sshd[4074]: Accepted publickey for core from 139.178.89.65 port 60876 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:07.863759 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:07.870625 systemd-logind[1482]: New session 8 of user core. Nov 6 23:48:07.879643 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 23:48:09.170235 sshd[4076]: Connection closed by 139.178.89.65 port 60876 Nov 6 23:48:09.170802 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:09.174267 systemd[1]: sshd@7-46.62.225.38:22-139.178.89.65:60876.service: Deactivated successfully. Nov 6 23:48:09.177784 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:48:09.179574 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:48:09.181049 systemd-logind[1482]: Removed session 8. Nov 6 23:48:14.348926 systemd[1]: Started sshd@8-46.62.225.38:22-139.178.89.65:60878.service - OpenSSH per-connection server daemon (139.178.89.65:60878). Nov 6 23:48:15.386447 sshd[4089]: Accepted publickey for core from 139.178.89.65 port 60878 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:15.388239 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:15.394416 systemd-logind[1482]: New session 9 of user core. Nov 6 23:48:15.404723 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 6 23:48:16.195947 sshd[4091]: Connection closed by 139.178.89.65 port 60878 Nov 6 23:48:16.197747 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:16.203080 systemd[1]: sshd@8-46.62.225.38:22-139.178.89.65:60878.service: Deactivated successfully. Nov 6 23:48:16.206950 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:48:16.208098 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:48:16.209709 systemd-logind[1482]: Removed session 9. Nov 6 23:48:21.377890 systemd[1]: Started sshd@9-46.62.225.38:22-139.178.89.65:60644.service - OpenSSH per-connection server daemon (139.178.89.65:60644). Nov 6 23:48:22.413583 sshd[4107]: Accepted publickey for core from 139.178.89.65 port 60644 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:22.415088 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:22.423599 systemd-logind[1482]: New session 10 of user core. Nov 6 23:48:22.429739 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 23:48:23.221886 sshd[4109]: Connection closed by 139.178.89.65 port 60644 Nov 6 23:48:23.223120 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:23.228899 systemd[1]: sshd@9-46.62.225.38:22-139.178.89.65:60644.service: Deactivated successfully. Nov 6 23:48:23.229155 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Nov 6 23:48:23.232328 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 23:48:23.234437 systemd-logind[1482]: Removed session 10. Nov 6 23:48:23.401857 systemd[1]: Started sshd@10-46.62.225.38:22-139.178.89.65:60656.service - OpenSSH per-connection server daemon (139.178.89.65:60656). Nov 6 23:48:24.434791 sshd[4122]: Accepted publickey for core from 139.178.89.65 port 60656 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:24.436723 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:24.442006 systemd-logind[1482]: New session 11 of user core. Nov 6 23:48:24.453753 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:48:25.346036 sshd[4124]: Connection closed by 139.178.89.65 port 60656 Nov 6 23:48:25.346828 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:25.352906 systemd[1]: sshd@10-46.62.225.38:22-139.178.89.65:60656.service: Deactivated successfully. Nov 6 23:48:25.355865 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:48:25.358288 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit. Nov 6 23:48:25.360322 systemd-logind[1482]: Removed session 11. Nov 6 23:48:25.527889 systemd[1]: Started sshd@11-46.62.225.38:22-139.178.89.65:60670.service - OpenSSH per-connection server daemon (139.178.89.65:60670). Nov 6 23:48:26.554564 sshd[4134]: Accepted publickey for core from 139.178.89.65 port 60670 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:26.556625 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:26.565399 systemd-logind[1482]: New session 12 of user core. Nov 6 23:48:26.576800 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 6 23:48:27.355790 sshd[4136]: Connection closed by 139.178.89.65 port 60670 Nov 6 23:48:27.356679 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:27.359119 systemd[1]: sshd@11-46.62.225.38:22-139.178.89.65:60670.service: Deactivated successfully. Nov 6 23:48:27.362562 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:48:27.365961 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:48:27.367175 systemd-logind[1482]: Removed session 12. Nov 6 23:48:32.563002 systemd[1]: Started sshd@12-46.62.225.38:22-139.178.89.65:34114.service - OpenSSH per-connection server daemon (139.178.89.65:34114). Nov 6 23:48:33.667704 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 34114 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:33.669356 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:33.678062 systemd-logind[1482]: New session 13 of user core. Nov 6 23:48:33.684741 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 23:48:34.475994 sshd[4151]: Connection closed by 139.178.89.65 port 34114 Nov 6 23:48:34.478011 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:34.487173 systemd[1]: sshd@12-46.62.225.38:22-139.178.89.65:34114.service: Deactivated successfully. Nov 6 23:48:34.490937 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:48:34.492309 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:48:34.494683 systemd-logind[1482]: Removed session 13. Nov 6 23:48:34.635912 systemd[1]: Started sshd@13-46.62.225.38:22-139.178.89.65:34130.service - OpenSSH per-connection server daemon (139.178.89.65:34130). Nov 6 23:48:35.642521 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 34130 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:35.644632 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:35.653594 systemd-logind[1482]: New session 14 of user core. Nov 6 23:48:35.658935 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:48:36.716709 sshd[4165]: Connection closed by 139.178.89.65 port 34130 Nov 6 23:48:36.717999 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:36.726095 systemd[1]: sshd@13-46.62.225.38:22-139.178.89.65:34130.service: Deactivated successfully. Nov 6 23:48:36.729432 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:48:36.733111 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Nov 6 23:48:36.734745 systemd-logind[1482]: Removed session 14. Nov 6 23:48:36.935996 systemd[1]: Started sshd@14-46.62.225.38:22-139.178.89.65:55558.service - OpenSSH per-connection server daemon (139.178.89.65:55558). Nov 6 23:48:38.093663 sshd[4175]: Accepted publickey for core from 139.178.89.65 port 55558 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:38.095448 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:38.103133 systemd-logind[1482]: New session 15 of user core. Nov 6 23:48:38.107744 systemd[1]: Started session-15.scope - Session 15 of User core. 
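The SSH activity above and below repeats one fixed pattern per connection: a per-connection sshd@N-….service starts, pam_unix opens a session for core, systemd-logind announces "New session N of user core", and on disconnect the service, the session scope and the logind session are torn down in that order. A hypothetical parser for journal entries in this format, pairing the logind open/close lines into per-session durations (the regexes and the assumed year 2025 are illustrative, not from any tool on the node):

import re
from datetime import datetime

STAMP = r"(Nov\s+\d+ \d{2}:\d{2}:\d{2}\.\d+)"
NEW_SESSION = re.compile(STAMP + r".*?systemd-logind\[\d+\]: New session (\d+) of user")
REMOVED_SESSION = re.compile(STAMP + r".*?systemd-logind\[\d+\]: Removed session (\d+)\.")

def ts(raw: str) -> datetime:
    # The journal omits the year; 2025 is assumed from the ISO timestamps elsewhere in this log.
    return datetime.strptime("2025 " + raw, "%Y %b %d %H:%M:%S.%f")

def session_durations(entries):
    opened = {}
    for entry in entries:                       # one journal entry per element
        if m := NEW_SESSION.search(entry):
            opened[m.group(2)] = ts(m.group(1))
        elif (m := REMOVED_SESSION.search(entry)) and m.group(2) in opened:
            yield m.group(2), (ts(m.group(1)) - opened.pop(m.group(2))).total_seconds()

# e.g. session 8 above: opened 23:48:07.870625, removed 23:48:09.181049 -> roughly 1.3 s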
Nov 6 23:48:39.650013 sshd[4177]: Connection closed by 139.178.89.65 port 55558 Nov 6 23:48:39.650908 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:39.656576 systemd[1]: sshd@14-46.62.225.38:22-139.178.89.65:55558.service: Deactivated successfully. Nov 6 23:48:39.659111 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:48:39.661717 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit. Nov 6 23:48:39.663721 systemd-logind[1482]: Removed session 15. Nov 6 23:48:39.818922 systemd[1]: Started sshd@15-46.62.225.38:22-139.178.89.65:55562.service - OpenSSH per-connection server daemon (139.178.89.65:55562). Nov 6 23:48:40.837358 sshd[4195]: Accepted publickey for core from 139.178.89.65 port 55562 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:40.838234 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:40.846084 systemd-logind[1482]: New session 16 of user core. Nov 6 23:48:40.853982 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:48:41.876612 sshd[4197]: Connection closed by 139.178.89.65 port 55562 Nov 6 23:48:41.877405 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:41.882878 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:48:41.883392 systemd[1]: sshd@15-46.62.225.38:22-139.178.89.65:55562.service: Deactivated successfully. Nov 6 23:48:41.886372 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:48:41.889549 systemd-logind[1482]: Removed session 16. Nov 6 23:48:42.094953 systemd[1]: Started sshd@16-46.62.225.38:22-139.178.89.65:55570.service - OpenSSH per-connection server daemon (139.178.89.65:55570). Nov 6 23:48:43.240954 sshd[4209]: Accepted publickey for core from 139.178.89.65 port 55570 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:43.242630 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:43.248904 systemd-logind[1482]: New session 17 of user core. Nov 6 23:48:43.254731 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:48:44.072809 sshd[4213]: Connection closed by 139.178.89.65 port 55570 Nov 6 23:48:44.073950 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:44.078254 systemd[1]: sshd@16-46.62.225.38:22-139.178.89.65:55570.service: Deactivated successfully. Nov 6 23:48:44.082445 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:48:44.084864 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit. Nov 6 23:48:44.086344 systemd-logind[1482]: Removed session 17. Nov 6 23:48:49.234941 systemd[1]: Started sshd@17-46.62.225.38:22-139.178.89.65:47398.service - OpenSSH per-connection server daemon (139.178.89.65:47398). Nov 6 23:48:50.241561 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 47398 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:50.243864 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:50.253487 systemd-logind[1482]: New session 18 of user core. Nov 6 23:48:50.259837 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 6 23:48:51.016005 sshd[4228]: Connection closed by 139.178.89.65 port 47398 Nov 6 23:48:51.016566 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:51.019577 systemd[1]: sshd@17-46.62.225.38:22-139.178.89.65:47398.service: Deactivated successfully. Nov 6 23:48:51.021289 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:48:51.022377 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit. Nov 6 23:48:51.023892 systemd-logind[1482]: Removed session 18. Nov 6 23:48:56.199958 systemd[1]: Started sshd@18-46.62.225.38:22-139.178.89.65:47412.service - OpenSSH per-connection server daemon (139.178.89.65:47412). Nov 6 23:48:57.229947 sshd[4240]: Accepted publickey for core from 139.178.89.65 port 47412 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:57.231769 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:57.240263 systemd-logind[1482]: New session 19 of user core. Nov 6 23:48:57.243723 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 23:48:58.029739 sshd[4242]: Connection closed by 139.178.89.65 port 47412 Nov 6 23:48:58.030828 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Nov 6 23:48:58.035703 systemd[1]: sshd@18-46.62.225.38:22-139.178.89.65:47412.service: Deactivated successfully. Nov 6 23:48:58.038869 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:48:58.042180 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:48:58.044446 systemd-logind[1482]: Removed session 19. Nov 6 23:48:58.243013 systemd[1]: Started sshd@19-46.62.225.38:22-139.178.89.65:35396.service - OpenSSH per-connection server daemon (139.178.89.65:35396). Nov 6 23:48:59.365748 sshd[4254]: Accepted publickey for core from 139.178.89.65 port 35396 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:48:59.367634 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:48:59.373836 systemd-logind[1482]: New session 20 of user core. Nov 6 23:48:59.381755 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:49:01.324146 containerd[1509]: time="2025-11-06T23:49:01.323903986Z" level=info msg="StopContainer for \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\" with timeout 30 (s)" Nov 6 23:49:01.327969 containerd[1509]: time="2025-11-06T23:49:01.327921966Z" level=info msg="Stop container \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\" with signal terminated" Nov 6 23:49:01.346945 containerd[1509]: time="2025-11-06T23:49:01.346864105Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:49:01.350207 systemd[1]: cri-containerd-9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40.scope: Deactivated successfully. 
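The containerd error just above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is a direct consequence of /etc/cni/net.d/05-cilium.conf being removed while the Cilium workload is stopped, which leaves the CNI configuration directory empty; the kubelet reports the same condition a few minutes later as "Container runtime network not ready". A trivial check an operator might run at this point, purely illustrative and not something the node executed:

from pathlib import Path

# List whatever CNI network configurations are still present on the node.
cni_dir = Path("/etc/cni/net.d")
configs = sorted(p.name for p in cni_dir.glob("*.conf*")) if cni_dir.is_dir() else []
print(configs if configs else "no CNI config present - new non-host-network pods cannot get sandboxes")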
Nov 6 23:49:01.360626 containerd[1509]: time="2025-11-06T23:49:01.358979614Z" level=info msg="StopContainer for \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\" with timeout 2 (s)" Nov 6 23:49:01.360626 containerd[1509]: time="2025-11-06T23:49:01.359305836Z" level=info msg="Stop container \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\" with signal terminated" Nov 6 23:49:01.373399 systemd-networkd[1411]: lxc_health: Link DOWN Nov 6 23:49:01.374607 systemd-networkd[1411]: lxc_health: Lost carrier Nov 6 23:49:01.393560 systemd[1]: cri-containerd-46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11.scope: Deactivated successfully. Nov 6 23:49:01.393964 systemd[1]: cri-containerd-46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11.scope: Consumed 5.844s CPU time, 194.1M memory peak, 73M read from disk, 13.3M written to disk. Nov 6 23:49:01.412693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40-rootfs.mount: Deactivated successfully. Nov 6 23:49:01.434424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11-rootfs.mount: Deactivated successfully. Nov 6 23:49:01.438286 containerd[1509]: time="2025-11-06T23:49:01.438112985Z" level=info msg="shim disconnected" id=46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11 namespace=k8s.io Nov 6 23:49:01.438286 containerd[1509]: time="2025-11-06T23:49:01.438278881Z" level=warning msg="cleaning up after shim disconnected" id=46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11 namespace=k8s.io Nov 6 23:49:01.438286 containerd[1509]: time="2025-11-06T23:49:01.438331819Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:01.438642 containerd[1509]: time="2025-11-06T23:49:01.438308830Z" level=info msg="shim disconnected" id=9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40 namespace=k8s.io Nov 6 23:49:01.438642 containerd[1509]: time="2025-11-06T23:49:01.438397598Z" level=warning msg="cleaning up after shim disconnected" id=9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40 namespace=k8s.io Nov 6 23:49:01.438642 containerd[1509]: time="2025-11-06T23:49:01.438401527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:01.452629 containerd[1509]: time="2025-11-06T23:49:01.452567085Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:49:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:49:01.453082 containerd[1509]: time="2025-11-06T23:49:01.453026914Z" level=info msg="StopContainer for \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\" returns successfully" Nov 6 23:49:01.453657 containerd[1509]: time="2025-11-06T23:49:01.453634388Z" level=info msg="StopPodSandbox for \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\"" Nov 6 23:49:01.456641 containerd[1509]: time="2025-11-06T23:49:01.456611965Z" level=info msg="StopContainer for \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\" returns successfully" Nov 6 23:49:01.457480 containerd[1509]: time="2025-11-06T23:49:01.457452584Z" level=info msg="StopPodSandbox for \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\"" Nov 6 23:49:01.457480 containerd[1509]: time="2025-11-06T23:49:01.457472163Z" 
level=info msg="Container to stop \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:49:01.459881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe-shm.mount: Deactivated successfully. Nov 6 23:49:01.463761 containerd[1509]: time="2025-11-06T23:49:01.454703543Z" level=info msg="Container to stop \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:49:01.463761 containerd[1509]: time="2025-11-06T23:49:01.463750457Z" level=info msg="Container to stop \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:49:01.463761 containerd[1509]: time="2025-11-06T23:49:01.463757357Z" level=info msg="Container to stop \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:49:01.463761 containerd[1509]: time="2025-11-06T23:49:01.463763267Z" level=info msg="Container to stop \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:49:01.464126 containerd[1509]: time="2025-11-06T23:49:01.463768807Z" level=info msg="Container to stop \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:49:01.466832 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05-shm.mount: Deactivated successfully. Nov 6 23:49:01.471337 systemd[1]: cri-containerd-2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05.scope: Deactivated successfully. Nov 6 23:49:01.473376 systemd[1]: cri-containerd-5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe.scope: Deactivated successfully. 
Nov 6 23:49:01.497402 containerd[1509]: time="2025-11-06T23:49:01.497176186Z" level=info msg="shim disconnected" id=5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe namespace=k8s.io Nov 6 23:49:01.497402 containerd[1509]: time="2025-11-06T23:49:01.497346471Z" level=warning msg="cleaning up after shim disconnected" id=5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe namespace=k8s.io Nov 6 23:49:01.497402 containerd[1509]: time="2025-11-06T23:49:01.497353701Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:01.498468 containerd[1509]: time="2025-11-06T23:49:01.498433644Z" level=info msg="shim disconnected" id=2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05 namespace=k8s.io Nov 6 23:49:01.498468 containerd[1509]: time="2025-11-06T23:49:01.498459593Z" level=warning msg="cleaning up after shim disconnected" id=2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05 namespace=k8s.io Nov 6 23:49:01.498468 containerd[1509]: time="2025-11-06T23:49:01.498464633Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:01.508521 containerd[1509]: time="2025-11-06T23:49:01.508464405Z" level=info msg="TearDown network for sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" successfully" Nov 6 23:49:01.508521 containerd[1509]: time="2025-11-06T23:49:01.508486624Z" level=info msg="StopPodSandbox for \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" returns successfully" Nov 6 23:49:01.514566 containerd[1509]: time="2025-11-06T23:49:01.514284990Z" level=info msg="TearDown network for sandbox \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" successfully" Nov 6 23:49:01.514566 containerd[1509]: time="2025-11-06T23:49:01.514320469Z" level=info msg="StopPodSandbox for \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" returns successfully" Nov 6 23:49:01.547224 kubelet[2687]: I1106 23:49:01.546887 2687 scope.go:117] "RemoveContainer" containerID="9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40" Nov 6 23:49:01.550895 containerd[1509]: time="2025-11-06T23:49:01.550867580Z" level=info msg="RemoveContainer for \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\"" Nov 6 23:49:01.553715 containerd[1509]: time="2025-11-06T23:49:01.553535323Z" level=info msg="RemoveContainer for \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\" returns successfully" Nov 6 23:49:01.553766 kubelet[2687]: I1106 23:49:01.553672 2687 scope.go:117] "RemoveContainer" containerID="9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40" Nov 6 23:49:01.554185 containerd[1509]: time="2025-11-06T23:49:01.554152608Z" level=error msg="ContainerStatus for \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\": not found" Nov 6 23:49:01.554305 kubelet[2687]: E1106 23:49:01.554275 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\": not found" containerID="9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40" Nov 6 23:49:01.566062 kubelet[2687]: I1106 23:49:01.555657 2687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40"} err="failed to get container status \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\": rpc error: code = NotFound desc = an error occurred when try to find container \"9833f544e557dfab93efa5eb4ec1e92ff98344e3ae34b37dad845d7e98f7ed40\": not found" Nov 6 23:49:01.566062 kubelet[2687]: I1106 23:49:01.565689 2687 scope.go:117] "RemoveContainer" containerID="46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11" Nov 6 23:49:01.566062 kubelet[2687]: I1106 23:49:01.565755 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-bpf-maps\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566062 kubelet[2687]: I1106 23:49:01.565770 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-config-path\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566062 kubelet[2687]: I1106 23:49:01.565780 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-run\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566062 kubelet[2687]: I1106 23:49:01.565789 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-kernel\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566184 kubelet[2687]: I1106 23:49:01.565798 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-etc-cni-netd\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566184 kubelet[2687]: I1106 23:49:01.565810 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e20300e-e311-4a12-a87b-f7e645683436-cilium-config-path\") pod \"9e20300e-e311-4a12-a87b-f7e645683436\" (UID: \"9e20300e-e311-4a12-a87b-f7e645683436\") " Nov 6 23:49:01.566184 kubelet[2687]: I1106 23:49:01.565818 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-xtables-lock\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566184 kubelet[2687]: I1106 23:49:01.565826 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-lib-modules\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566184 kubelet[2687]: I1106 23:49:01.565837 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hubble-tls\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566184 kubelet[2687]: I1106 23:49:01.565847 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-net\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566274 kubelet[2687]: I1106 23:49:01.565858 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hostproc\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566274 kubelet[2687]: I1106 23:49:01.565867 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cni-path\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566274 kubelet[2687]: I1106 23:49:01.565876 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqcqh\" (UniqueName: \"kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-kube-api-access-jqcqh\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566274 kubelet[2687]: I1106 23:49:01.565886 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-cgroup\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566274 kubelet[2687]: I1106 23:49:01.565898 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f80a5697-3751-4fa4-a8ec-a8ba8197449c-clustermesh-secrets\") pod \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\" (UID: \"f80a5697-3751-4fa4-a8ec-a8ba8197449c\") " Nov 6 23:49:01.566274 kubelet[2687]: I1106 23:49:01.565907 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfvqr\" (UniqueName: \"kubernetes.io/projected/9e20300e-e311-4a12-a87b-f7e645683436-kube-api-access-vfvqr\") pod \"9e20300e-e311-4a12-a87b-f7e645683436\" (UID: \"9e20300e-e311-4a12-a87b-f7e645683436\") " Nov 6 23:49:01.574833 kubelet[2687]: I1106 23:49:01.573626 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.574833 kubelet[2687]: I1106 23:49:01.574813 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.581638 kubelet[2687]: I1106 23:49:01.581616 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:49:01.581685 kubelet[2687]: I1106 23:49:01.581646 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.581685 kubelet[2687]: I1106 23:49:01.581657 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.581685 kubelet[2687]: I1106 23:49:01.581666 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.582917 kubelet[2687]: I1106 23:49:01.582890 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e20300e-e311-4a12-a87b-f7e645683436-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e20300e-e311-4a12-a87b-f7e645683436" (UID: "9e20300e-e311-4a12-a87b-f7e645683436"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:49:01.582917 kubelet[2687]: I1106 23:49:01.582912 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.584605 kubelet[2687]: I1106 23:49:01.584421 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:49:01.584605 kubelet[2687]: I1106 23:49:01.584454 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.584605 kubelet[2687]: I1106 23:49:01.584464 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hostproc" (OuterVolumeSpecName: "hostproc") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.584605 kubelet[2687]: I1106 23:49:01.584472 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cni-path" (OuterVolumeSpecName: "cni-path") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.585790 kubelet[2687]: I1106 23:49:01.585777 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-kube-api-access-jqcqh" (OuterVolumeSpecName: "kube-api-access-jqcqh") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "kube-api-access-jqcqh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:49:01.585845 kubelet[2687]: I1106 23:49:01.585838 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:49:01.586906 kubelet[2687]: I1106 23:49:01.586888 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e20300e-e311-4a12-a87b-f7e645683436-kube-api-access-vfvqr" (OuterVolumeSpecName: "kube-api-access-vfvqr") pod "9e20300e-e311-4a12-a87b-f7e645683436" (UID: "9e20300e-e311-4a12-a87b-f7e645683436"). InnerVolumeSpecName "kube-api-access-vfvqr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:49:01.587069 kubelet[2687]: I1106 23:49:01.587057 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80a5697-3751-4fa4-a8ec-a8ba8197449c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f80a5697-3751-4fa4-a8ec-a8ba8197449c" (UID: "f80a5697-3751-4fa4-a8ec-a8ba8197449c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 23:49:01.587277 containerd[1509]: time="2025-11-06T23:49:01.587224695Z" level=info msg="RemoveContainer for \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\"" Nov 6 23:49:01.589998 containerd[1509]: time="2025-11-06T23:49:01.589975757Z" level=info msg="RemoveContainer for \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\" returns successfully" Nov 6 23:49:01.590175 kubelet[2687]: I1106 23:49:01.590147 2687 scope.go:117] "RemoveContainer" containerID="f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9" Nov 6 23:49:01.590856 containerd[1509]: time="2025-11-06T23:49:01.590837896Z" level=info msg="RemoveContainer for \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\"" Nov 6 23:49:01.593328 containerd[1509]: time="2025-11-06T23:49:01.593309544Z" level=info msg="RemoveContainer for \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\" returns successfully" Nov 6 23:49:01.593468 kubelet[2687]: I1106 23:49:01.593428 2687 scope.go:117] "RemoveContainer" containerID="e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63" Nov 6 23:49:01.594845 containerd[1509]: time="2025-11-06T23:49:01.594832916Z" level=info msg="RemoveContainer for \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\"" Nov 6 23:49:01.603952 containerd[1509]: time="2025-11-06T23:49:01.603930949Z" level=info msg="RemoveContainer for \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\" returns successfully" Nov 6 23:49:01.604046 kubelet[2687]: I1106 23:49:01.604020 2687 scope.go:117] "RemoveContainer" containerID="024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1" Nov 6 23:49:01.604616 containerd[1509]: time="2025-11-06T23:49:01.604566304Z" level=info msg="RemoveContainer for \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\"" Nov 6 23:49:01.607166 containerd[1509]: time="2025-11-06T23:49:01.607142699Z" level=info msg="RemoveContainer for \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\" returns successfully" Nov 6 23:49:01.607320 kubelet[2687]: I1106 23:49:01.607248 2687 scope.go:117] "RemoveContainer" containerID="1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7" Nov 6 23:49:01.607915 containerd[1509]: time="2025-11-06T23:49:01.607896041Z" level=info msg="RemoveContainer for \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\"" Nov 6 23:49:01.610120 containerd[1509]: time="2025-11-06T23:49:01.610060977Z" level=info msg="RemoveContainer for \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\" returns successfully" Nov 6 23:49:01.610251 kubelet[2687]: I1106 23:49:01.610187 2687 scope.go:117] "RemoveContainer" containerID="46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11" Nov 6 23:49:01.610332 containerd[1509]: time="2025-11-06T23:49:01.610312801Z" level=error msg="ContainerStatus for \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\": not found" Nov 6 23:49:01.610501 kubelet[2687]: E1106 23:49:01.610474 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\": not found" 
containerID="46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11" Nov 6 23:49:01.610574 kubelet[2687]: I1106 23:49:01.610553 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11"} err="failed to get container status \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\": rpc error: code = NotFound desc = an error occurred when try to find container \"46e2c50d67945d60609c2e2d688e951aa60d1f5ce3f560fe9f3c31d3a74acf11\": not found" Nov 6 23:49:01.610574 kubelet[2687]: I1106 23:49:01.610571 2687 scope.go:117] "RemoveContainer" containerID="f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9" Nov 6 23:49:01.610737 containerd[1509]: time="2025-11-06T23:49:01.610712811Z" level=error msg="ContainerStatus for \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\": not found" Nov 6 23:49:01.610851 kubelet[2687]: E1106 23:49:01.610785 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\": not found" containerID="f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9" Nov 6 23:49:01.610851 kubelet[2687]: I1106 23:49:01.610797 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9"} err="failed to get container status \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f71b4bb38a26adf531e52630b08bba75fafc5b790fb5147536929e64c5b94ed9\": not found" Nov 6 23:49:01.610851 kubelet[2687]: I1106 23:49:01.610806 2687 scope.go:117] "RemoveContainer" containerID="e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63" Nov 6 23:49:01.610907 containerd[1509]: time="2025-11-06T23:49:01.610879257Z" level=error msg="ContainerStatus for \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\": not found" Nov 6 23:49:01.611009 kubelet[2687]: E1106 23:49:01.610964 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\": not found" containerID="e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63" Nov 6 23:49:01.611033 kubelet[2687]: I1106 23:49:01.610987 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63"} err="failed to get container status \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\": rpc error: code = NotFound desc = an error occurred when try to find container \"e164c4d3c6f8569ab2008d3db11f3c10d49c189c6b2986aa5be05bba88100a63\": not found" Nov 6 23:49:01.611033 kubelet[2687]: I1106 23:49:01.611023 2687 scope.go:117] "RemoveContainer" containerID="024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1" Nov 6 
23:49:01.611199 containerd[1509]: time="2025-11-06T23:49:01.611140210Z" level=error msg="ContainerStatus for \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\": not found" Nov 6 23:49:01.611274 kubelet[2687]: E1106 23:49:01.611259 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\": not found" containerID="024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1" Nov 6 23:49:01.611295 kubelet[2687]: I1106 23:49:01.611272 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1"} err="failed to get container status \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\": rpc error: code = NotFound desc = an error occurred when try to find container \"024ecb097d066a8f252a3b4d8e9a07048d559dbea429d5653799ea3094840ce1\": not found" Nov 6 23:49:01.611295 kubelet[2687]: I1106 23:49:01.611282 2687 scope.go:117] "RemoveContainer" containerID="1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7" Nov 6 23:49:01.611361 containerd[1509]: time="2025-11-06T23:49:01.611351706Z" level=error msg="ContainerStatus for \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\": not found" Nov 6 23:49:01.611421 kubelet[2687]: E1106 23:49:01.611398 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\": not found" containerID="1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7" Nov 6 23:49:01.611443 kubelet[2687]: I1106 23:49:01.611419 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7"} err="failed to get container status \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b1ed1ef95737634621c54564cd700901ec5609f4e5975ae92fe4fd347aa6fe7\": not found" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.666898 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-net\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.666929 2687 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hostproc\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.666947 2687 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cni-path\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.666960 2687 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-jqcqh\" (UniqueName: \"kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-kube-api-access-jqcqh\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.666972 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-cgroup\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.666984 2687 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f80a5697-3751-4fa4-a8ec-a8ba8197449c-clustermesh-secrets\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.666996 2687 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfvqr\" (UniqueName: \"kubernetes.io/projected/9e20300e-e311-4a12-a87b-f7e645683436-kube-api-access-vfvqr\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668119 kubelet[2687]: I1106 23:49:01.667008 2687 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-bpf-maps\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667021 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-config-path\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667033 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-cilium-run\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667289 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-host-proc-sys-kernel\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667301 2687 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-etc-cni-netd\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667315 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e20300e-e311-4a12-a87b-f7e645683436-cilium-config-path\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667337 2687 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-lib-modules\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667545 2687 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f80a5697-3751-4fa4-a8ec-a8ba8197449c-hubble-tls\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.668626 kubelet[2687]: I1106 23:49:01.667556 2687 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f80a5697-3751-4fa4-a8ec-a8ba8197449c-xtables-lock\") on node \"ci-4230-2-4-n-9b34d37d4f\" DevicePath \"\"" Nov 6 23:49:01.852012 systemd[1]: Removed slice kubepods-besteffort-pod9e20300e_e311_4a12_a87b_f7e645683436.slice - libcontainer container kubepods-besteffort-pod9e20300e_e311_4a12_a87b_f7e645683436.slice. Nov 6 23:49:01.862728 systemd[1]: Removed slice kubepods-burstable-podf80a5697_3751_4fa4_a8ec_a8ba8197449c.slice - libcontainer container kubepods-burstable-podf80a5697_3751_4fa4_a8ec_a8ba8197449c.slice. Nov 6 23:49:01.863074 systemd[1]: kubepods-burstable-podf80a5697_3751_4fa4_a8ec_a8ba8197449c.slice: Consumed 5.926s CPU time, 194.4M memory peak, 73.1M read from disk, 16.6M written to disk. Nov 6 23:49:02.312397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe-rootfs.mount: Deactivated successfully. Nov 6 23:49:02.312630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05-rootfs.mount: Deactivated successfully. Nov 6 23:49:02.312814 systemd[1]: var-lib-kubelet-pods-9e20300e\x2de311\x2d4a12\x2da87b\x2df7e645683436-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvfvqr.mount: Deactivated successfully. Nov 6 23:49:02.312965 systemd[1]: var-lib-kubelet-pods-f80a5697\x2d3751\x2d4fa4\x2da8ec\x2da8ba8197449c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 23:49:02.313123 systemd[1]: var-lib-kubelet-pods-f80a5697\x2d3751\x2d4fa4\x2da8ec\x2da8ba8197449c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqcqh.mount: Deactivated successfully. Nov 6 23:49:02.313264 systemd[1]: var-lib-kubelet-pods-f80a5697\x2d3751\x2d4fa4\x2da8ec\x2da8ba8197449c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 6 23:49:03.004391 kubelet[2687]: I1106 23:49:03.004317 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e20300e-e311-4a12-a87b-f7e645683436" path="/var/lib/kubelet/pods/9e20300e-e311-4a12-a87b-f7e645683436/volumes" Nov 6 23:49:03.004994 kubelet[2687]: I1106 23:49:03.004929 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80a5697-3751-4fa4-a8ec-a8ba8197449c" path="/var/lib/kubelet/pods/f80a5697-3751-4fa4-a8ec-a8ba8197449c/volumes" Nov 6 23:49:03.402795 sshd[4256]: Connection closed by 139.178.89.65 port 35396 Nov 6 23:49:03.403368 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Nov 6 23:49:03.406687 systemd[1]: sshd@19-46.62.225.38:22-139.178.89.65:35396.service: Deactivated successfully. Nov 6 23:49:03.408141 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:49:03.408730 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:49:03.409584 systemd-logind[1482]: Removed session 20. Nov 6 23:49:03.603874 systemd[1]: Started sshd@20-46.62.225.38:22-139.178.89.65:35410.service - OpenSSH per-connection server daemon (139.178.89.65:35410). Nov 6 23:49:04.707102 sshd[4417]: Accepted publickey for core from 139.178.89.65 port 35410 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:49:04.708594 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:49:04.715440 systemd-logind[1482]: New session 21 of user core. Nov 6 23:49:04.723000 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 6 23:49:05.933117 kubelet[2687]: I1106 23:49:05.933073 2687 memory_manager.go:355] "RemoveStaleState removing state" podUID="f80a5697-3751-4fa4-a8ec-a8ba8197449c" containerName="cilium-agent" Nov 6 23:49:05.933117 kubelet[2687]: I1106 23:49:05.933109 2687 memory_manager.go:355] "RemoveStaleState removing state" podUID="9e20300e-e311-4a12-a87b-f7e645683436" containerName="cilium-operator" Nov 6 23:49:05.941768 kubelet[2687]: I1106 23:49:05.941703 2687 status_manager.go:890] "Failed to get status for pod" podUID="0735fb4b-0564-4209-9dbb-7f6c1f96774e" pod="kube-system/cilium-x7cgj" err="pods \"cilium-x7cgj\" is forbidden: User \"system:node:ci-4230-2-4-n-9b34d37d4f\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-4-n-9b34d37d4f' and this object" Nov 6 23:49:05.946329 systemd[1]: Created slice kubepods-burstable-pod0735fb4b_0564_4209_9dbb_7f6c1f96774e.slice - libcontainer container kubepods-burstable-pod0735fb4b_0564_4209_9dbb_7f6c1f96774e.slice. Nov 6 23:49:05.995108 kubelet[2687]: I1106 23:49:05.995077 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-lib-modules\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995108 kubelet[2687]: I1106 23:49:05.995107 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-host-proc-sys-kernel\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995108 kubelet[2687]: I1106 23:49:05.995120 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-cilium-cgroup\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995275 kubelet[2687]: I1106 23:49:05.995131 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-xtables-lock\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995275 kubelet[2687]: I1106 23:49:05.995140 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-cilium-run\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995275 kubelet[2687]: I1106 23:49:05.995150 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0735fb4b-0564-4209-9dbb-7f6c1f96774e-clustermesh-secrets\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995275 kubelet[2687]: I1106 23:49:05.995161 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0735fb4b-0564-4209-9dbb-7f6c1f96774e-cilium-config-path\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995275 kubelet[2687]: I1106 23:49:05.995171 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0735fb4b-0564-4209-9dbb-7f6c1f96774e-cilium-ipsec-secrets\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995275 kubelet[2687]: I1106 23:49:05.995192 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-bpf-maps\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995362 kubelet[2687]: I1106 23:49:05.995201 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw7sm\" (UniqueName: \"kubernetes.io/projected/0735fb4b-0564-4209-9dbb-7f6c1f96774e-kube-api-access-bw7sm\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995362 kubelet[2687]: I1106 23:49:05.995215 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-hostproc\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995362 kubelet[2687]: I1106 23:49:05.995225 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-host-proc-sys-net\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995362 kubelet[2687]: I1106 23:49:05.995235 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-cni-path\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995362 kubelet[2687]: I1106 23:49:05.995244 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0735fb4b-0564-4209-9dbb-7f6c1f96774e-hubble-tls\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:05.995362 kubelet[2687]: I1106 23:49:05.995253 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0735fb4b-0564-4209-9dbb-7f6c1f96774e-etc-cni-netd\") pod \"cilium-x7cgj\" (UID: \"0735fb4b-0564-4209-9dbb-7f6c1f96774e\") " pod="kube-system/cilium-x7cgj" Nov 6 23:49:06.155894 kubelet[2687]: E1106 23:49:06.155850 2687 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 23:49:06.163514 sshd[4419]: Connection closed by 139.178.89.65 port 35410 Nov 6 23:49:06.164131 sshd-session[4417]: pam_unix(sshd:session): 
session closed for user core Nov 6 23:49:06.168099 systemd[1]: sshd@20-46.62.225.38:22-139.178.89.65:35410.service: Deactivated successfully. Nov 6 23:49:06.171466 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:49:06.173687 systemd-logind[1482]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:49:06.174985 systemd-logind[1482]: Removed session 21. Nov 6 23:49:06.257585 containerd[1509]: time="2025-11-06T23:49:06.257466848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7cgj,Uid:0735fb4b-0564-4209-9dbb-7f6c1f96774e,Namespace:kube-system,Attempt:0,}" Nov 6 23:49:06.272580 containerd[1509]: time="2025-11-06T23:49:06.272298734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:49:06.272580 containerd[1509]: time="2025-11-06T23:49:06.272369343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:49:06.272580 containerd[1509]: time="2025-11-06T23:49:06.272385802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:49:06.273836 containerd[1509]: time="2025-11-06T23:49:06.272977078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:49:06.294706 systemd[1]: Started cri-containerd-9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7.scope - libcontainer container 9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7. Nov 6 23:49:06.325422 systemd[1]: Started sshd@21-46.62.225.38:22-139.178.89.65:42078.service - OpenSSH per-connection server daemon (139.178.89.65:42078). Nov 6 23:49:06.330007 containerd[1509]: time="2025-11-06T23:49:06.329949148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7cgj,Uid:0735fb4b-0564-4209-9dbb-7f6c1f96774e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\"" Nov 6 23:49:06.337324 containerd[1509]: time="2025-11-06T23:49:06.337188801Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:49:06.349414 containerd[1509]: time="2025-11-06T23:49:06.349363608Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec\"" Nov 6 23:49:06.350799 containerd[1509]: time="2025-11-06T23:49:06.350667129Z" level=info msg="StartContainer for \"02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec\"" Nov 6 23:49:06.372670 systemd[1]: Started cri-containerd-02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec.scope - libcontainer container 02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec. Nov 6 23:49:06.403409 containerd[1509]: time="2025-11-06T23:49:06.403353507Z" level=info msg="StartContainer for \"02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec\" returns successfully" Nov 6 23:49:06.420389 systemd[1]: cri-containerd-02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec.scope: Deactivated successfully. 
Nov 6 23:49:06.421122 systemd[1]: cri-containerd-02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec.scope: Consumed 23ms CPU time, 9M memory peak, 2.6M read from disk. Nov 6 23:49:06.477993 containerd[1509]: time="2025-11-06T23:49:06.477741523Z" level=info msg="shim disconnected" id=02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec namespace=k8s.io Nov 6 23:49:06.477993 containerd[1509]: time="2025-11-06T23:49:06.477810241Z" level=warning msg="cleaning up after shim disconnected" id=02f42c19314da1e51d574886c1c2c61902a9d8f6883fca42e297709c697bdaec namespace=k8s.io Nov 6 23:49:06.477993 containerd[1509]: time="2025-11-06T23:49:06.477822451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:06.496021 containerd[1509]: time="2025-11-06T23:49:06.495947451Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:49:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:49:06.562262 containerd[1509]: time="2025-11-06T23:49:06.562053440Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:49:06.576864 containerd[1509]: time="2025-11-06T23:49:06.576786208Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df\"" Nov 6 23:49:06.578603 containerd[1509]: time="2025-11-06T23:49:06.577792565Z" level=info msg="StartContainer for \"f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df\"" Nov 6 23:49:06.618783 systemd[1]: Started cri-containerd-f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df.scope - libcontainer container f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df. Nov 6 23:49:06.652890 containerd[1509]: time="2025-11-06T23:49:06.652835456Z" level=info msg="StartContainer for \"f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df\" returns successfully" Nov 6 23:49:06.665093 systemd[1]: cri-containerd-f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df.scope: Deactivated successfully. Nov 6 23:49:06.665436 systemd[1]: cri-containerd-f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df.scope: Consumed 25ms CPU time, 7.6M memory peak, 2.2M read from disk. Nov 6 23:49:06.702356 containerd[1509]: time="2025-11-06T23:49:06.702243351Z" level=info msg="shim disconnected" id=f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df namespace=k8s.io Nov 6 23:49:06.702356 containerd[1509]: time="2025-11-06T23:49:06.702331899Z" level=warning msg="cleaning up after shim disconnected" id=f907be9be12e6dc346a7a2fc7eb75b8c7b319ecf17101e8d141faf9fb90026df namespace=k8s.io Nov 6 23:49:06.702356 containerd[1509]: time="2025-11-06T23:49:06.702344609Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:07.333772 sshd[4469]: Accepted publickey for core from 139.178.89.65 port 42078 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:49:07.336016 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:49:07.344473 systemd-logind[1482]: New session 22 of user core. 
Nov 6 23:49:07.351746 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 23:49:07.565801 containerd[1509]: time="2025-11-06T23:49:07.565723236Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:49:07.592420 containerd[1509]: time="2025-11-06T23:49:07.592062894Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6\"" Nov 6 23:49:07.594544 containerd[1509]: time="2025-11-06T23:49:07.593429233Z" level=info msg="StartContainer for \"195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6\"" Nov 6 23:49:07.643938 systemd[1]: Started cri-containerd-195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6.scope - libcontainer container 195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6. Nov 6 23:49:07.693291 containerd[1509]: time="2025-11-06T23:49:07.693188264Z" level=info msg="StartContainer for \"195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6\" returns successfully" Nov 6 23:49:07.701753 systemd[1]: cri-containerd-195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6.scope: Deactivated successfully. Nov 6 23:49:07.738770 containerd[1509]: time="2025-11-06T23:49:07.738659965Z" level=info msg="shim disconnected" id=195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6 namespace=k8s.io Nov 6 23:49:07.738770 containerd[1509]: time="2025-11-06T23:49:07.738730064Z" level=warning msg="cleaning up after shim disconnected" id=195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6 namespace=k8s.io Nov 6 23:49:07.738770 containerd[1509]: time="2025-11-06T23:49:07.738741744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:08.027322 sshd[4602]: Connection closed by 139.178.89.65 port 42078 Nov 6 23:49:08.028230 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Nov 6 23:49:08.033464 systemd[1]: sshd@21-46.62.225.38:22-139.178.89.65:42078.service: Deactivated successfully. Nov 6 23:49:08.036018 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:49:08.039416 systemd-logind[1482]: Session 22 logged out. Waiting for processes to exit. Nov 6 23:49:08.041313 systemd-logind[1482]: Removed session 22. Nov 6 23:49:08.109559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-195246d233af7de9efd3eb4ea176a3a3ad7c9c10c2a8e5edc9aa734658cd1cd6-rootfs.mount: Deactivated successfully. Nov 6 23:49:08.241905 systemd[1]: Started sshd@22-46.62.225.38:22-139.178.89.65:42094.service - OpenSSH per-connection server daemon (139.178.89.65:42094). 
Nov 6 23:49:08.574487 containerd[1509]: time="2025-11-06T23:49:08.574407814Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:49:08.604031 containerd[1509]: time="2025-11-06T23:49:08.603732214Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557\"" Nov 6 23:49:08.606006 containerd[1509]: time="2025-11-06T23:49:08.605878675Z" level=info msg="StartContainer for \"f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557\"" Nov 6 23:49:08.651777 systemd[1]: Started cri-containerd-f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557.scope - libcontainer container f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557. Nov 6 23:49:08.698259 systemd[1]: cri-containerd-f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557.scope: Deactivated successfully. Nov 6 23:49:08.701538 containerd[1509]: time="2025-11-06T23:49:08.701372855Z" level=info msg="StartContainer for \"f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557\" returns successfully" Nov 6 23:49:08.743996 containerd[1509]: time="2025-11-06T23:49:08.743927666Z" level=info msg="shim disconnected" id=f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557 namespace=k8s.io Nov 6 23:49:08.744337 containerd[1509]: time="2025-11-06T23:49:08.744247628Z" level=warning msg="cleaning up after shim disconnected" id=f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557 namespace=k8s.io Nov 6 23:49:08.744337 containerd[1509]: time="2025-11-06T23:49:08.744274117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:09.110435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6f29783eee6f1cc8560eb2a9415059bb6bc45e56abbc7d5b309aaec02bee557-rootfs.mount: Deactivated successfully. Nov 6 23:49:09.386162 sshd[4666]: Accepted publickey for core from 139.178.89.65 port 42094 ssh2: RSA SHA256:cjPMyP4iqQjYKk/6ojYcS0wCb6TI0fxaXqSTxqDpLQo Nov 6 23:49:09.388379 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:49:09.396758 systemd-logind[1482]: New session 23 of user core. Nov 6 23:49:09.406834 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:49:09.579657 containerd[1509]: time="2025-11-06T23:49:09.579554774Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:49:09.631198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066836014.mount: Deactivated successfully. 
Nov 6 23:49:09.633795 containerd[1509]: time="2025-11-06T23:49:09.633729521Z" level=info msg="CreateContainer within sandbox \"9de9a63acc826c4b72875bb417175e01f02cdd3c2ef1ad56dc76f7bb1b1584a7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f\"" Nov 6 23:49:09.634806 containerd[1509]: time="2025-11-06T23:49:09.634619481Z" level=info msg="StartContainer for \"67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f\"" Nov 6 23:49:09.680825 systemd[1]: Started cri-containerd-67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f.scope - libcontainer container 67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f. Nov 6 23:49:09.720206 containerd[1509]: time="2025-11-06T23:49:09.720166921Z" level=info msg="StartContainer for \"67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f\" returns successfully" Nov 6 23:49:10.170574 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 6 23:49:10.607058 kubelet[2687]: I1106 23:49:10.606931 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x7cgj" podStartSLOduration=5.606899964 podStartE2EDuration="5.606899964s" podCreationTimestamp="2025-11-06 23:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:49:10.605156132 +0000 UTC m=+209.738039329" watchObservedRunningTime="2025-11-06 23:49:10.606899964 +0000 UTC m=+209.739783171" Nov 6 23:49:13.174812 systemd-networkd[1411]: lxc_health: Link UP Nov 6 23:49:13.175512 systemd-networkd[1411]: lxc_health: Gained carrier Nov 6 23:49:14.438891 systemd[1]: run-containerd-runc-k8s.io-67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f-runc.BLFqKC.mount: Deactivated successfully. Nov 6 23:49:15.164555 systemd-networkd[1411]: lxc_health: Gained IPv6LL Nov 6 23:49:16.552376 systemd[1]: run-containerd-runc-k8s.io-67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f-runc.cRryk8.mount: Deactivated successfully. Nov 6 23:49:18.686369 systemd[1]: run-containerd-runc-k8s.io-67c246df23be9caf504b18ea113deecd96aa72981d7eba7e4842485ba36bbc1f-runc.4POdWy.mount: Deactivated successfully. Nov 6 23:49:18.937453 sshd[4723]: Connection closed by 139.178.89.65 port 42094 Nov 6 23:49:18.939144 sshd-session[4666]: pam_unix(sshd:session): session closed for user core Nov 6 23:49:18.944588 systemd[1]: sshd@22-46.62.225.38:22-139.178.89.65:42094.service: Deactivated successfully. Nov 6 23:49:18.948265 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:49:18.950101 systemd-logind[1482]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:49:18.951836 systemd-logind[1482]: Removed session 23. 
Nov 6 23:49:31.449524 update_engine[1492]: I20251106 23:49:31.449298 1492 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 6 23:49:31.449524 update_engine[1492]: I20251106 23:49:31.449381 1492 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.453886 1492 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.454797 1492 omaha_request_params.cc:62] Current group set to stable Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.455001 1492 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.455013 1492 update_attempter.cc:643] Scheduling an action processor start. Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.455039 1492 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.455085 1492 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.455160 1492 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.455169 1492 omaha_request_action.cc:272] Request: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: Nov 6 23:49:31.464382 update_engine[1492]: I20251106 23:49:31.455178 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 6 23:49:31.470738 update_engine[1492]: I20251106 23:49:31.468288 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 6 23:49:31.472116 update_engine[1492]: I20251106 23:49:31.471349 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 6 23:49:31.472534 update_engine[1492]: E20251106 23:49:31.472370 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 6 23:49:31.472534 update_engine[1492]: I20251106 23:49:31.472452 1492 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 6 23:49:31.472776 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 6 23:49:37.247128 kubelet[2687]: E1106 23:49:37.247069 2687 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53238->10.0.0.2:2379: read: connection timed out" Nov 6 23:49:37.256620 systemd[1]: cri-containerd-cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d.scope: Deactivated successfully. Nov 6 23:49:37.258968 systemd[1]: cri-containerd-cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d.scope: Consumed 2.737s CPU time, 33.4M memory peak, 14.5M read from disk. Nov 6 23:49:37.288916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d-rootfs.mount: Deactivated successfully. 
Nov 6 23:49:37.295576 containerd[1509]: time="2025-11-06T23:49:37.295466591Z" level=info msg="shim disconnected" id=cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d namespace=k8s.io Nov 6 23:49:37.295576 containerd[1509]: time="2025-11-06T23:49:37.295572439Z" level=warning msg="cleaning up after shim disconnected" id=cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d namespace=k8s.io Nov 6 23:49:37.295952 containerd[1509]: time="2025-11-06T23:49:37.295584469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:37.640739 kubelet[2687]: I1106 23:49:37.640711 2687 scope.go:117] "RemoveContainer" containerID="cac65ed4d44057f80462e303f38a61766853aad03e6643a414e89f28fc02136d" Nov 6 23:49:37.642678 containerd[1509]: time="2025-11-06T23:49:37.642644519Z" level=info msg="CreateContainer within sandbox \"6784b1a5f8404f9f03230b6c801cd021cdfb4e0cf35a00b1b09f895f7ef398ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 6 23:49:37.660471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912672466.mount: Deactivated successfully. Nov 6 23:49:37.661182 containerd[1509]: time="2025-11-06T23:49:37.661088070Z" level=info msg="CreateContainer within sandbox \"6784b1a5f8404f9f03230b6c801cd021cdfb4e0cf35a00b1b09f895f7ef398ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c6bfc669b41aee59f5d9029de8c933dca98468699908e47c32c8e7d459013977\"" Nov 6 23:49:37.662417 containerd[1509]: time="2025-11-06T23:49:37.661601083Z" level=info msg="StartContainer for \"c6bfc669b41aee59f5d9029de8c933dca98468699908e47c32c8e7d459013977\"" Nov 6 23:49:37.665399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425848834.mount: Deactivated successfully. Nov 6 23:49:37.693668 systemd[1]: Started cri-containerd-c6bfc669b41aee59f5d9029de8c933dca98468699908e47c32c8e7d459013977.scope - libcontainer container c6bfc669b41aee59f5d9029de8c933dca98468699908e47c32c8e7d459013977. Nov 6 23:49:37.738481 containerd[1509]: time="2025-11-06T23:49:37.738363931Z" level=info msg="StartContainer for \"c6bfc669b41aee59f5d9029de8c933dca98468699908e47c32c8e7d459013977\" returns successfully" Nov 6 23:49:37.889253 systemd[1]: cri-containerd-fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45.scope: Deactivated successfully. Nov 6 23:49:37.890188 systemd[1]: cri-containerd-fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45.scope: Consumed 4.670s CPU time, 69.4M memory peak, 23.9M read from disk. Nov 6 23:49:37.927248 containerd[1509]: time="2025-11-06T23:49:37.927124796Z" level=info msg="shim disconnected" id=fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45 namespace=k8s.io Nov 6 23:49:37.927617 containerd[1509]: time="2025-11-06T23:49:37.927588489Z" level=warning msg="cleaning up after shim disconnected" id=fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45 namespace=k8s.io Nov 6 23:49:37.927718 containerd[1509]: time="2025-11-06T23:49:37.927703017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:49:38.287025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45-rootfs.mount: Deactivated successfully. 
Nov 6 23:49:38.646029 kubelet[2687]: I1106 23:49:38.645409 2687 scope.go:117] "RemoveContainer" containerID="fa5b9718f2ea575dc69a32f7ea493a08146d4536513c356e3a7732980ac84e45" Nov 6 23:49:38.649278 containerd[1509]: time="2025-11-06T23:49:38.649211233Z" level=info msg="CreateContainer within sandbox \"2bb89907ccb8cd0d48a28187becbb529aa099741488440cf68f65d4f2d973e79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 6 23:49:38.675846 containerd[1509]: time="2025-11-06T23:49:38.675616568Z" level=info msg="CreateContainer within sandbox \"2bb89907ccb8cd0d48a28187becbb529aa099741488440cf68f65d4f2d973e79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"32c282becc3534e97a0bcb00113dc39506c54194e9640cb56b34bdaf3d1348e4\"" Nov 6 23:49:38.678274 containerd[1509]: time="2025-11-06T23:49:38.676389816Z" level=info msg="StartContainer for \"32c282becc3534e97a0bcb00113dc39506c54194e9640cb56b34bdaf3d1348e4\"" Nov 6 23:49:38.729754 systemd[1]: Started cri-containerd-32c282becc3534e97a0bcb00113dc39506c54194e9640cb56b34bdaf3d1348e4.scope - libcontainer container 32c282becc3534e97a0bcb00113dc39506c54194e9640cb56b34bdaf3d1348e4. Nov 6 23:49:38.799283 containerd[1509]: time="2025-11-06T23:49:38.799225523Z" level=info msg="StartContainer for \"32c282becc3534e97a0bcb00113dc39506c54194e9640cb56b34bdaf3d1348e4\" returns successfully" Nov 6 23:49:39.286128 systemd[1]: run-containerd-runc-k8s.io-32c282becc3534e97a0bcb00113dc39506c54194e9640cb56b34bdaf3d1348e4-runc.Lp74va.mount: Deactivated successfully. Nov 6 23:49:40.534255 kubelet[2687]: E1106 23:49:40.513815 2687 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53042->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-4-n-9b34d37d4f.18758fd40aca1417 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-4-n-9b34d37d4f,UID:fdb0654bad4581ed0be549f4d71437c6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-n-9b34d37d4f,},FirstTimestamp:2025-11-06 23:49:30.062271511 +0000 UTC m=+229.195154678,LastTimestamp:2025-11-06 23:49:30.062271511 +0000 UTC m=+229.195154678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-n-9b34d37d4f,}" Nov 6 23:49:41.010120 containerd[1509]: time="2025-11-06T23:49:41.010078033Z" level=info msg="StopPodSandbox for \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\"" Nov 6 23:49:41.010866 containerd[1509]: time="2025-11-06T23:49:41.010817122Z" level=info msg="TearDown network for sandbox \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" successfully" Nov 6 23:49:41.010866 containerd[1509]: time="2025-11-06T23:49:41.010853472Z" level=info msg="StopPodSandbox for \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" returns successfully" Nov 6 23:49:41.011531 containerd[1509]: time="2025-11-06T23:49:41.011442604Z" level=info msg="RemovePodSandbox for \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\"" Nov 6 23:49:41.011624 containerd[1509]: time="2025-11-06T23:49:41.011543852Z" level=info msg="Forcibly stopping 
sandbox \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\"" Nov 6 23:49:41.011665 containerd[1509]: time="2025-11-06T23:49:41.011635451Z" level=info msg="TearDown network for sandbox \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" successfully" Nov 6 23:49:41.026995 containerd[1509]: time="2025-11-06T23:49:41.026938171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 6 23:49:41.027135 containerd[1509]: time="2025-11-06T23:49:41.027012460Z" level=info msg="RemovePodSandbox \"5520cf1ac83fefaa11981ed6a3b17e3d72c456772353a531f8d41a56ae2da0fe\" returns successfully" Nov 6 23:49:41.028165 containerd[1509]: time="2025-11-06T23:49:41.028099395Z" level=info msg="StopPodSandbox for \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\"" Nov 6 23:49:41.028269 containerd[1509]: time="2025-11-06T23:49:41.028197163Z" level=info msg="TearDown network for sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" successfully" Nov 6 23:49:41.028269 containerd[1509]: time="2025-11-06T23:49:41.028249943Z" level=info msg="StopPodSandbox for \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" returns successfully" Nov 6 23:49:41.028835 containerd[1509]: time="2025-11-06T23:49:41.028791404Z" level=info msg="RemovePodSandbox for \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\"" Nov 6 23:49:41.028835 containerd[1509]: time="2025-11-06T23:49:41.028831245Z" level=info msg="Forcibly stopping sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\"" Nov 6 23:49:41.029047 containerd[1509]: time="2025-11-06T23:49:41.028896423Z" level=info msg="TearDown network for sandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" successfully" Nov 6 23:49:41.032666 containerd[1509]: time="2025-11-06T23:49:41.032580260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 6 23:49:41.032666 containerd[1509]: time="2025-11-06T23:49:41.032665059Z" level=info msg="RemovePodSandbox \"2e08f55dbb09e1f8dab4788bc3583fc57365cafc6f98d043a0032cac273c8a05\" returns successfully" Nov 6 23:49:41.399715 update_engine[1492]: I20251106 23:49:41.399612 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 6 23:49:41.400236 update_engine[1492]: I20251106 23:49:41.399995 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 6 23:49:41.400408 update_engine[1492]: I20251106 23:49:41.400357 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 6 23:49:41.400753 update_engine[1492]: E20251106 23:49:41.400712 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 6 23:49:41.400810 update_engine[1492]: I20251106 23:49:41.400797 1492 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 6 23:49:46.526188 kubelet[2687]: I1106 23:49:46.526085 2687 status_manager.go:890] "Failed to get status for pod" podUID="fdb0654bad4581ed0be549f4d71437c6" pod="kube-system/kube-apiserver-ci-4230-2-4-n-9b34d37d4f" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53160->10.0.0.2:2379: read: connection timed out" Nov 6 23:49:47.262660 kubelet[2687]: E1106 23:49:47.262560 2687 controller.go:195] "Failed to update lease" err="Put \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 6 23:49:51.399822 update_engine[1492]: I20251106 23:49:51.399698 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 6 23:49:51.400581 update_engine[1492]: I20251106 23:49:51.400072 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 6 23:49:51.400581 update_engine[1492]: I20251106 23:49:51.400534 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 6 23:49:51.401297 update_engine[1492]: E20251106 23:49:51.401221 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 6 23:49:51.401421 update_engine[1492]: I20251106 23:49:51.401311 1492 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 6 23:49:57.263937 kubelet[2687]: E1106 23:49:57.263433 2687 controller.go:195] "Failed to update lease" err="Put \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 6 23:50:01.404557 update_engine[1492]: I20251106 23:50:01.404414 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 6 23:50:01.405065 update_engine[1492]: I20251106 23:50:01.404794 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 6 23:50:01.405198 update_engine[1492]: I20251106 23:50:01.405159 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 6 23:50:01.405607 update_engine[1492]: E20251106 23:50:01.405560 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 6 23:50:01.405671 update_engine[1492]: I20251106 23:50:01.405618 1492 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 6 23:50:01.405671 update_engine[1492]: I20251106 23:50:01.405628 1492 omaha_request_action.cc:617] Omaha request response: Nov 6 23:50:01.405739 update_engine[1492]: E20251106 23:50:01.405725 1492 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 6 23:50:01.405777 update_engine[1492]: I20251106 23:50:01.405761 1492 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 6 23:50:01.405777 update_engine[1492]: I20251106 23:50:01.405769 1492 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 6 23:50:01.405849 update_engine[1492]: I20251106 23:50:01.405776 1492 update_attempter.cc:306] Processing Done. 
Nov 6 23:50:01.405849 update_engine[1492]: E20251106 23:50:01.405793 1492 update_attempter.cc:619] Update failed. Nov 6 23:50:01.405849 update_engine[1492]: I20251106 23:50:01.405802 1492 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 6 23:50:01.405849 update_engine[1492]: I20251106 23:50:01.405810 1492 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 6 23:50:01.405849 update_engine[1492]: I20251106 23:50:01.405818 1492 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 6 23:50:01.406059 update_engine[1492]: I20251106 23:50:01.405942 1492 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 6 23:50:01.406059 update_engine[1492]: I20251106 23:50:01.405980 1492 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 6 23:50:01.406059 update_engine[1492]: I20251106 23:50:01.405992 1492 omaha_request_action.cc:272] Request: Nov 6 23:50:01.406059 update_engine[1492]: Nov 6 23:50:01.406059 update_engine[1492]: Nov 6 23:50:01.406059 update_engine[1492]: Nov 6 23:50:01.406059 update_engine[1492]: Nov 6 23:50:01.406059 update_engine[1492]: Nov 6 23:50:01.406059 update_engine[1492]: Nov 6 23:50:01.406059 update_engine[1492]: I20251106 23:50:01.406003 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 6 23:50:01.406431 update_engine[1492]: I20251106 23:50:01.406271 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 6 23:50:01.406711 update_engine[1492]: I20251106 23:50:01.406606 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 6 23:50:01.407240 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 6 23:50:01.407636 update_engine[1492]: E20251106 23:50:01.407016 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 6 23:50:01.407636 update_engine[1492]: I20251106 23:50:01.407070 1492 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 6 23:50:01.407636 update_engine[1492]: I20251106 23:50:01.407080 1492 omaha_request_action.cc:617] Omaha request response: Nov 6 23:50:01.407636 update_engine[1492]: I20251106 23:50:01.407090 1492 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 6 23:50:01.407636 update_engine[1492]: I20251106 23:50:01.407096 1492 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 6 23:50:01.407636 update_engine[1492]: I20251106 23:50:01.407102 1492 update_attempter.cc:306] Processing Done. Nov 6 23:50:01.407636 update_engine[1492]: I20251106 23:50:01.407110 1492 update_attempter.cc:310] Error event sent. 
Nov 6 23:50:01.407636 update_engine[1492]: I20251106 23:50:01.407122 1492 update_check_scheduler.cc:74] Next update check in 40m45s Nov 6 23:50:01.408036 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 6 23:50:07.265549 kubelet[2687]: E1106 23:50:07.264295 2687 controller.go:195] "Failed to update lease" err="Put \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 6 23:50:14.537617 kubelet[2687]: E1106 23:50:14.537374 2687 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-4-n-9b34d37d4f.18758fd44919dde0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-4-n-9b34d37d4f,UID:fdb0654bad4581ed0be549f4d71437c6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-n-9b34d37d4f,},FirstTimestamp:2025-11-06 23:49:31.107687904 +0000 UTC m=+230.240571101,LastTimestamp:2025-11-06 23:49:31.107687904 +0000 UTC m=+230.240571101,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-n-9b34d37d4f,}" Nov 6 23:50:17.266042 kubelet[2687]: E1106 23:50:17.265221 2687 controller.go:195] "Failed to update lease" err="Put \"https://46.62.225.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-n-9b34d37d4f?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 6 23:50:17.266042 kubelet[2687]: I1106 23:50:17.265289 2687 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 6 23:50:27.266740 kubelet[2687]: E1106 23:50:27.266653 2687 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ci-4230-2-4-n-9b34d37d4f)" interval="200ms"