Aug 13 07:28:58.047821 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:28:58.047862 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:28:58.047876 kernel: BIOS-provided physical RAM map: Aug 13 07:28:58.047892 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:28:58.047902 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:28:58.047911 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:28:58.047922 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Aug 13 07:28:58.047932 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Aug 13 07:28:58.047942 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 07:28:58.047960 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 07:28:58.047970 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:28:58.047979 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:28:58.048003 kernel: NX (Execute Disable) protection: active Aug 13 07:28:58.048014 kernel: APIC: Static calls initialized Aug 13 07:28:58.048026 kernel: SMBIOS 2.8 present. Aug 13 07:28:58.048045 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014 Aug 13 07:28:58.048057 kernel: Hypervisor detected: KVM Aug 13 07:28:58.048073 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:28:58.048084 kernel: kvm-clock: using sched offset of 5071221394 cycles Aug 13 07:28:58.048095 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:28:58.048106 kernel: tsc: Detected 2799.998 MHz processor Aug 13 07:28:58.048117 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:28:58.048129 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:28:58.048139 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Aug 13 07:28:58.048150 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:28:58.048161 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:28:58.048176 kernel: Using GB pages for direct mapping Aug 13 07:28:58.048187 kernel: ACPI: Early table checksum verification disabled Aug 13 07:28:58.048198 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS ) Aug 13 07:28:58.048209 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:28:58.048220 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:28:58.048231 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:28:58.048241 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Aug 13 07:28:58.048252 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:28:58.048275 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Aug 13 07:28:58.048290 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:28:58.048301 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:28:58.048311 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Aug 13 07:28:58.048322 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Aug 13 07:28:58.048344 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Aug 13 07:28:58.048360 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Aug 13 07:28:58.048370 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Aug 13 07:28:58.048385 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Aug 13 07:28:58.048407 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Aug 13 07:28:58.048417 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:28:58.048432 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:28:58.048443 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Aug 13 07:28:58.048464 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Aug 13 07:28:58.048474 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Aug 13 07:28:58.048485 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Aug 13 07:28:58.048500 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Aug 13 07:28:58.048511 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Aug 13 07:28:58.048521 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Aug 13 07:28:58.048531 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Aug 13 07:28:58.048542 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Aug 13 07:28:58.048551 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Aug 13 07:28:58.048562 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Aug 13 07:28:58.048584 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Aug 13 07:28:58.048599 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Aug 13 07:28:58.048679 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Aug 13 07:28:58.048696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:28:58.048708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 07:28:58.048719 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Aug 13 07:28:58.048731 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Aug 13 07:28:58.048743 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Aug 13 07:28:58.048754 kernel: Zone ranges: Aug 13 07:28:58.048766 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:28:58.048777 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Aug 13 07:28:58.048796 kernel: Normal empty Aug 13 07:28:58.048807 kernel: Movable zone start for each node Aug 13 07:28:58.048819 kernel: Early memory node ranges Aug 13 07:28:58.048830 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:28:58.048841 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Aug 13 07:28:58.048852 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Aug 13 07:28:58.048864 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:28:58.048875 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:28:58.048892 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Aug 13 07:28:58.048904 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:28:58.048922 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:28:58.048933 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Aug 13 07:28:58.048945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:28:58.048969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:28:58.048980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:28:58.048991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:28:58.049005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:28:58.049016 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:28:58.049040 kernel: TSC deadline timer available Aug 13 07:28:58.049057 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Aug 13 07:28:58.049068 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:28:58.049080 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 07:28:58.049091 kernel: Booting paravirtualized kernel on KVM Aug 13 07:28:58.049102 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:28:58.049114 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Aug 13 07:28:58.049125 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Aug 13 07:28:58.049136 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Aug 13 07:28:58.049148 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Aug 13 07:28:58.049164 kernel: kvm-guest: PV spinlocks enabled Aug 13 07:28:58.049175 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 07:28:58.049188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:28:58.049200 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:28:58.049211 kernel: random: crng init done Aug 13 07:28:58.049222 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:28:58.049234 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:28:58.049245 kernel: Fallback order for Node 0: 0 Aug 13 07:28:58.049261 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Aug 13 07:28:58.049278 kernel: Policy zone: DMA32 Aug 13 07:28:58.049290 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:28:58.049301 kernel: software IO TLB: area num 16. Aug 13 07:28:58.049313 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 194828K reserved, 0K cma-reserved) Aug 13 07:28:58.049325 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Aug 13 07:28:58.049336 kernel: Kernel/User page tables isolation: enabled Aug 13 07:28:58.049347 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:28:58.049377 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:28:58.049388 kernel: Dynamic Preempt: voluntary Aug 13 07:28:58.049399 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:28:58.049415 kernel: rcu: RCU event tracing is enabled. 
Aug 13 07:28:58.049439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Aug 13 07:28:58.049449 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:28:58.049472 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:28:58.049488 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:28:58.049498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:28:58.049509 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Aug 13 07:28:58.049520 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Aug 13 07:28:58.049531 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 07:28:58.049546 kernel: Console: colour VGA+ 80x25 Aug 13 07:28:58.049557 kernel: printk: console [tty0] enabled Aug 13 07:28:58.049568 kernel: printk: console [ttyS0] enabled Aug 13 07:28:58.049579 kernel: ACPI: Core revision 20230628 Aug 13 07:28:58.049594 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:28:58.049643 kernel: x2apic enabled Aug 13 07:28:58.049669 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:28:58.049686 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Aug 13 07:28:58.049699 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Aug 13 07:28:58.049711 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 07:28:58.049723 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 07:28:58.049735 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 07:28:58.049747 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:28:58.049759 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:28:58.049770 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:28:58.049789 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 07:28:58.049801 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:28:58.049813 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:28:58.049825 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:28:58.049837 kernel: MMIO Stale Data: Unknown: No mitigations Aug 13 07:28:58.049848 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 13 07:28:58.049860 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:28:58.049872 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:28:58.049884 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:28:58.049896 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:28:58.049907 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:28:58.049924 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:28:58.049952 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:28:58.049967 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:28:58.049979 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:28:58.049990 kernel: landlock: Up and running. Aug 13 07:28:58.050013 kernel: SELinux: Initializing. 
Aug 13 07:28:58.050023 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:28:58.050034 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:28:58.050044 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Aug 13 07:28:58.050055 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Aug 13 07:28:58.050078 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Aug 13 07:28:58.050095 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Aug 13 07:28:58.050106 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Aug 13 07:28:58.050117 kernel: signal: max sigframe size: 1776 Aug 13 07:28:58.050128 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:28:58.050140 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:28:58.050151 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:28:58.050162 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:28:58.050173 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:28:58.050184 kernel: .... node #0, CPUs: #1 Aug 13 07:28:58.050201 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 07:28:58.050212 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:28:58.050223 kernel: smpboot: Max logical packages: 16 Aug 13 07:28:58.050234 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Aug 13 07:28:58.050246 kernel: devtmpfs: initialized Aug 13 07:28:58.050257 kernel: x86/mm: Memory block size: 128MB Aug 13 07:28:58.050268 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:28:58.050292 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Aug 13 07:28:58.050303 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:28:58.050328 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:28:58.050352 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:28:58.050363 kernel: audit: type=2000 audit(1755070136.121:1): state=initialized audit_enabled=0 res=1 Aug 13 07:28:58.050374 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:28:58.050385 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:28:58.050407 kernel: cpuidle: using governor menu Aug 13 07:28:58.050418 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:28:58.050428 kernel: dca service started, version 1.12.1 Aug 13 07:28:58.050439 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 07:28:58.050459 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 07:28:58.050470 kernel: PCI: Using configuration type 1 for base access Aug 13 07:28:58.050481 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 07:28:58.050492 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:28:58.050502 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:28:58.050513 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:28:58.050524 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:28:58.050534 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:28:58.050545 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:28:58.050565 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:28:58.050588 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:28:58.050599 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:28:58.050610 kernel: ACPI: Interpreter enabled Aug 13 07:28:58.050621 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:28:58.050667 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:28:58.050680 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:28:58.050693 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:28:58.050704 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 07:28:58.050730 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:28:58.051042 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:28:58.051253 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 13 07:28:58.051430 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 13 07:28:58.051448 kernel: PCI host bridge to bus 0000:00 Aug 13 07:28:58.051719 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:28:58.051900 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 07:28:58.052087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 07:28:58.052255 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 07:28:58.052418 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 07:28:58.052576 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Aug 13 07:28:58.052786 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:28:58.053045 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 07:28:58.053264 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Aug 13 07:28:58.053445 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Aug 13 07:28:58.053626 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Aug 13 07:28:58.054284 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Aug 13 07:28:58.055172 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:28:58.055384 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.057808 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Aug 13 07:28:58.058043 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.058236 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Aug 13 07:28:58.058449 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.058714 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Aug 13 07:28:58.058913 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.059113 kernel: pci 0000:00:02.3: reg 0x10: [mem 
0xfea54000-0xfea54fff] Aug 13 07:28:58.059415 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.059596 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Aug 13 07:28:58.059822 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.060078 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Aug 13 07:28:58.060290 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.060462 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Aug 13 07:28:58.062730 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Aug 13 07:28:58.062920 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Aug 13 07:28:58.063110 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:28:58.063284 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df] Aug 13 07:28:58.063453 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Aug 13 07:28:58.065372 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Aug 13 07:28:58.065589 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Aug 13 07:28:58.065852 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:28:58.066041 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f] Aug 13 07:28:58.066223 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Aug 13 07:28:58.066476 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Aug 13 07:28:58.066721 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 07:28:58.066896 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 07:28:58.067130 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 07:28:58.067323 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff] Aug 13 07:28:58.067498 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Aug 13 07:28:58.067822 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 07:28:58.068020 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 07:28:58.068237 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Aug 13 07:28:58.068427 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Aug 13 07:28:58.070708 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 13 07:28:58.070908 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Aug 13 07:28:58.071087 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 13 07:28:58.071260 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 07:28:58.071471 kernel: pci_bus 0000:02: extended config space not accessible Aug 13 07:28:58.071702 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Aug 13 07:28:58.071907 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Aug 13 07:28:58.072085 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 13 07:28:58.072265 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Aug 13 07:28:58.072447 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 13 07:28:58.072625 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 07:28:58.074886 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Aug 13 07:28:58.075100 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Aug 13 07:28:58.075310 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 13 07:28:58.075500 kernel: pci 0000:00:02.1: bridge window [mem 
0xfe800000-0xfe9fffff] Aug 13 07:28:58.075720 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 07:28:58.075922 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Aug 13 07:28:58.076137 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Aug 13 07:28:58.076327 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 13 07:28:58.076504 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 13 07:28:58.078787 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 07:28:58.078973 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 13 07:28:58.079146 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 13 07:28:58.079349 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 07:28:58.079526 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 13 07:28:58.079737 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 13 07:28:58.079908 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 07:28:58.080078 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 13 07:28:58.080298 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 13 07:28:58.080477 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 07:28:58.082139 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 13 07:28:58.082346 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 13 07:28:58.082510 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 07:28:58.084685 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 13 07:28:58.084861 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 13 07:28:58.085031 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 07:28:58.085067 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:28:58.085081 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:28:58.085093 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:28:58.085106 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:28:58.085118 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 07:28:58.085130 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 07:28:58.085143 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 07:28:58.085155 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 07:28:58.085167 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 07:28:58.085190 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 07:28:58.085203 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 07:28:58.085215 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 07:28:58.085227 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 07:28:58.085240 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 07:28:58.085252 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 07:28:58.085264 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 07:28:58.085276 kernel: iommu: Default domain type: Translated Aug 13 07:28:58.085288 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:28:58.085311 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:28:58.085324 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:28:58.085336 
kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:28:58.085348 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Aug 13 07:28:58.085551 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 07:28:58.085775 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 07:28:58.085945 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:28:58.085964 kernel: vgaarb: loaded Aug 13 07:28:58.085992 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:28:58.086011 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:28:58.086024 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:28:58.086036 kernel: pnp: PnP ACPI init Aug 13 07:28:58.086225 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 07:28:58.086246 kernel: pnp: PnP ACPI: found 5 devices Aug 13 07:28:58.086259 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:28:58.086272 kernel: NET: Registered PF_INET protocol family Aug 13 07:28:58.086298 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:28:58.086312 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 07:28:58.086324 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:28:58.086337 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:28:58.086349 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 07:28:58.086368 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 07:28:58.086381 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:28:58.086393 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:28:58.086405 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:28:58.086436 kernel: NET: Registered PF_XDP protocol family Aug 13 07:28:58.086602 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Aug 13 07:28:58.086803 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Aug 13 07:28:58.086972 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Aug 13 07:28:58.087150 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 07:28:58.087322 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 07:28:58.087492 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 07:28:58.089738 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 07:28:58.089913 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff] Aug 13 07:28:58.090084 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff] Aug 13 07:28:58.090255 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff] Aug 13 07:28:58.090424 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff] Aug 13 07:28:58.090597 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff] Aug 13 07:28:58.090797 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff] Aug 13 07:28:58.090993 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff] Aug 13 07:28:58.091239 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 13 07:28:58.091436 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] 
Aug 13 07:28:58.093665 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 13 07:28:58.093851 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 07:28:58.094022 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 13 07:28:58.094220 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Aug 13 07:28:58.094390 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 13 07:28:58.094570 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 07:28:58.094808 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 13 07:28:58.094989 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff] Aug 13 07:28:58.095167 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Aug 13 07:28:58.095339 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 07:28:58.097738 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 13 07:28:58.097930 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff] Aug 13 07:28:58.098116 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 13 07:28:58.098302 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 07:28:58.098470 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 13 07:28:58.098663 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff] Aug 13 07:28:58.098837 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 13 07:28:58.099006 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 07:28:58.099177 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 13 07:28:58.099356 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff] Aug 13 07:28:58.099527 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 13 07:28:58.101897 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 07:28:58.102131 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 13 07:28:58.102323 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff] Aug 13 07:28:58.102499 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 13 07:28:58.102723 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 07:28:58.102927 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 13 07:28:58.103108 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff] Aug 13 07:28:58.103289 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 13 07:28:58.103478 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 07:28:58.105807 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 13 07:28:58.105988 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff] Aug 13 07:28:58.106159 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 13 07:28:58.106338 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 07:28:58.106526 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:28:58.106737 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:28:58.106895 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 07:28:58.107048 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 07:28:58.107201 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 07:28:58.107378 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Aug 13 07:28:58.107573 kernel: 
pci_bus 0000:01: resource 0 [io 0xc000-0xcfff] Aug 13 07:28:58.109824 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Aug 13 07:28:58.110016 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 07:28:58.110216 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff] Aug 13 07:28:58.110396 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Aug 13 07:28:58.110573 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 07:28:58.110785 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff] Aug 13 07:28:58.110956 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Aug 13 07:28:58.111127 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 07:28:58.111356 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff] Aug 13 07:28:58.111533 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Aug 13 07:28:58.112271 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 07:28:58.112493 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff] Aug 13 07:28:58.112757 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Aug 13 07:28:58.112926 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 07:28:58.113117 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff] Aug 13 07:28:58.113340 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Aug 13 07:28:58.113508 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 07:28:58.113745 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff] Aug 13 07:28:58.113912 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Aug 13 07:28:58.114084 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 07:28:58.114274 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff] Aug 13 07:28:58.114465 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Aug 13 07:28:58.114637 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 07:28:58.114871 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Aug 13 07:28:58.115038 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Aug 13 07:28:58.115213 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 07:28:58.115244 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 07:28:58.115271 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:28:58.115284 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 07:28:58.115296 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Aug 13 07:28:58.115307 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:28:58.115329 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Aug 13 07:28:58.115353 kernel: Initialise system trusted keyrings Aug 13 07:28:58.115373 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:28:58.115385 kernel: Key type asymmetric registered Aug 13 07:28:58.115409 kernel: Asymmetric key parser 'x509' registered Aug 13 07:28:58.115438 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:28:58.115451 kernel: io scheduler mq-deadline registered Aug 13 07:28:58.115462 kernel: io scheduler kyber registered Aug 13 07:28:58.115491 kernel: io scheduler bfq registered Aug 13 07:28:58.115714 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 
Aug 13 07:28:58.115891 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Aug 13 07:28:58.116062 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.116241 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Aug 13 07:28:58.116427 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Aug 13 07:28:58.116680 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.116857 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Aug 13 07:28:58.117026 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Aug 13 07:28:58.117200 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.117371 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Aug 13 07:28:58.117552 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Aug 13 07:28:58.117790 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.117973 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Aug 13 07:28:58.118155 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Aug 13 07:28:58.118393 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.118689 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Aug 13 07:28:58.118862 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Aug 13 07:28:58.119054 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.119241 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Aug 13 07:28:58.119407 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Aug 13 07:28:58.119586 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.119804 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Aug 13 07:28:58.119981 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Aug 13 07:28:58.120167 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 07:28:58.120189 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:28:58.120203 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 07:28:58.120216 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 07:28:58.120229 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:28:58.120242 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:28:58.120255 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:28:58.120283 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:28:58.120307 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:28:58.120520 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 07:28:58.120541 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:28:58.120782 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 07:28:58.120957 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:28:57 
UTC (1755070137) Aug 13 07:28:58.121128 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 07:28:58.121147 kernel: intel_pstate: CPU model not supported Aug 13 07:28:58.121176 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:28:58.121189 kernel: Segment Routing with IPv6 Aug 13 07:28:58.121202 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:28:58.121215 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:28:58.121228 kernel: Key type dns_resolver registered Aug 13 07:28:58.121249 kernel: IPI shorthand broadcast: enabled Aug 13 07:28:58.121262 kernel: sched_clock: Marking stable (1398004375, 227933933)->(1862365175, -236426867) Aug 13 07:28:58.121275 kernel: registered taskstats version 1 Aug 13 07:28:58.121288 kernel: Loading compiled-in X.509 certificates Aug 13 07:28:58.121312 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:28:58.121325 kernel: Key type .fscrypt registered Aug 13 07:28:58.121337 kernel: Key type fscrypt-provisioning registered Aug 13 07:28:58.121350 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:28:58.121363 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:28:58.121375 kernel: ima: No architecture policies found Aug 13 07:28:58.121388 kernel: clk: Disabling unused clocks Aug 13 07:28:58.121413 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:28:58.121425 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:28:58.121448 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:28:58.121461 kernel: Run /init as init process Aug 13 07:28:58.121496 kernel: with arguments: Aug 13 07:28:58.121509 kernel: /init Aug 13 07:28:58.121522 kernel: with environment: Aug 13 07:28:58.121534 kernel: HOME=/ Aug 13 07:28:58.121546 kernel: TERM=linux Aug 13 07:28:58.121559 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:28:58.121575 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:28:58.121602 systemd[1]: Detected virtualization kvm. Aug 13 07:28:58.121629 systemd[1]: Detected architecture x86-64. Aug 13 07:28:58.121671 systemd[1]: Running in initrd. Aug 13 07:28:58.121685 systemd[1]: No hostname configured, using default hostname. Aug 13 07:28:58.121699 systemd[1]: Hostname set to . Aug 13 07:28:58.121712 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:28:58.121726 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:28:58.121753 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:28:58.121767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:28:58.121781 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:28:58.121795 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:28:58.121809 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:28:58.121830 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Aug 13 07:28:58.121846 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:28:58.121872 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:28:58.121885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:28:58.121899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:28:58.121913 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:28:58.121926 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:28:58.121940 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:28:58.121953 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:28:58.121967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:28:58.121980 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:28:58.122006 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:28:58.122019 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:28:58.122033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:28:58.122046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:28:58.122060 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:28:58.122073 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:28:58.122094 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:28:58.122108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:28:58.122132 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:28:58.122154 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:28:58.122168 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:28:58.122181 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:28:58.122195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:28:58.122208 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:28:58.122280 systemd-journald[202]: Collecting audit messages is disabled. Aug 13 07:28:58.122335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:28:58.122349 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:28:58.122371 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:28:58.122406 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:28:58.122419 kernel: Bridge firewalling registered Aug 13 07:28:58.122433 systemd-journald[202]: Journal started Aug 13 07:28:58.122464 systemd-journald[202]: Runtime Journal (/run/log/journal/b5ac3163283c466b91efa3ea3ade2e8d) is 4.7M, max 38.0M, 33.2M free. Aug 13 07:28:58.062544 systemd-modules-load[203]: Inserted module 'overlay' Aug 13 07:28:58.109455 systemd-modules-load[203]: Inserted module 'br_netfilter' Aug 13 07:28:58.165711 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:28:58.167136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Aug 13 07:28:58.168155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:28:58.176491 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:28:58.186854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:28:58.189825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:28:58.197814 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:28:58.205924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:28:58.214362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:28:58.221740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:28:58.228074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:28:58.233905 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:28:58.241567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:28:58.248867 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:28:58.256431 dracut-cmdline[236]: dracut-dracut-053 Aug 13 07:28:58.263247 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:28:58.289553 systemd-resolved[240]: Positive Trust Anchors: Aug 13 07:28:58.289578 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:28:58.289621 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:28:58.309200 systemd-resolved[240]: Defaulting to hostname 'linux'. Aug 13 07:28:58.315674 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:28:58.316795 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:28:58.403700 kernel: SCSI subsystem initialized Aug 13 07:28:58.416666 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:28:58.431678 kernel: iscsi: registered transport (tcp) Aug 13 07:28:58.460855 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:28:58.460945 kernel: QLogic iSCSI HBA Driver Aug 13 07:28:58.525335 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:28:58.534897 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:28:58.580973 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:28:58.581096 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:28:58.583313 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:28:58.631697 kernel: raid6: sse2x4 gen() 14359 MB/s Aug 13 07:28:58.649688 kernel: raid6: sse2x2 gen() 9855 MB/s Aug 13 07:28:58.668210 kernel: raid6: sse2x1 gen() 10331 MB/s Aug 13 07:28:58.668308 kernel: raid6: using algorithm sse2x4 gen() 14359 MB/s Aug 13 07:28:58.687215 kernel: raid6: .... xor() 8280 MB/s, rmw enabled Aug 13 07:28:58.687312 kernel: raid6: using ssse3x2 recovery algorithm Aug 13 07:28:58.711667 kernel: xor: automatically using best checksumming function avx Aug 13 07:28:58.900659 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:28:58.915892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:28:58.922934 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:28:58.951060 systemd-udevd[421]: Using default interface naming scheme 'v255'. Aug 13 07:28:58.958021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:28:58.970071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:28:58.989472 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Aug 13 07:28:59.031474 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:28:59.037839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:28:59.222143 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:28:59.232852 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:28:59.254696 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:28:59.260700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:28:59.262590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:28:59.264745 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:28:59.270843 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:28:59.300779 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:28:59.360831 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Aug 13 07:28:59.361181 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:28:59.385513 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 07:28:59.396660 kernel: libata version 3.00 loaded. Aug 13 07:28:59.407918 kernel: AVX version of gcm_enc/dec engaged. Aug 13 07:28:59.408078 kernel: AES CTR mode by8 optimization enabled Aug 13 07:28:59.408645 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 07:28:59.409039 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 07:28:59.411855 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 07:28:59.414356 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 07:28:59.419067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:28:59.420121 kernel: scsi host0: ahci Aug 13 07:28:59.421016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:28:59.453361 kernel: scsi host1: ahci Aug 13 07:28:59.454502 kernel: scsi host2: ahci Aug 13 07:28:59.454848 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:28:59.454909 kernel: GPT:17805311 != 125829119 Aug 13 07:28:59.454929 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:28:59.454946 kernel: GPT:17805311 != 125829119 Aug 13 07:28:59.454962 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:28:59.454996 kernel: scsi host3: ahci Aug 13 07:28:59.455238 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:28:59.455268 kernel: scsi host4: ahci Aug 13 07:28:59.455519 kernel: scsi host5: ahci Aug 13 07:28:59.456293 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 Aug 13 07:28:59.456316 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 Aug 13 07:28:59.456334 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 Aug 13 07:28:59.456351 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 Aug 13 07:28:59.456389 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 Aug 13 07:28:59.456408 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 Aug 13 07:28:59.456049 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:28:59.464486 kernel: ACPI: bus type USB registered Aug 13 07:28:59.464520 kernel: usbcore: registered new interface driver usbfs Aug 13 07:28:59.464538 kernel: usbcore: registered new interface driver hub Aug 13 07:28:59.464554 kernel: usbcore: registered new device driver usb Aug 13 07:28:59.464700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:28:59.464943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:28:59.465736 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:28:59.482049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:28:59.504711 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Aug 13 07:28:59.526275 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 07:28:59.538650 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (476) Aug 13 07:28:59.542376 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:28:59.565805 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:28:59.625724 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:28:59.626956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:28:59.635015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:28:59.641909 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:28:59.644811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:28:59.652519 disk-uuid[560]: Primary Header is updated. Aug 13 07:28:59.652519 disk-uuid[560]: Secondary Entries is updated. Aug 13 07:28:59.652519 disk-uuid[560]: Secondary Header is updated. 
Aug 13 07:28:59.659669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:28:59.666507 kernel: GPT:disk_guids don't match. Aug 13 07:28:59.666563 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:28:59.666642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:28:59.678670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:28:59.680904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:28:59.762647 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 07:28:59.762737 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 07:28:59.768694 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 07:28:59.770638 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 07:28:59.770672 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 07:28:59.772683 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 07:28:59.839655 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 13 07:28:59.842656 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Aug 13 07:28:59.845666 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Aug 13 07:28:59.850084 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 13 07:28:59.850386 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Aug 13 07:28:59.852660 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Aug 13 07:28:59.855884 kernel: hub 1-0:1.0: USB hub found Aug 13 07:28:59.856198 kernel: hub 1-0:1.0: 4 ports detected Aug 13 07:28:59.859664 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Aug 13 07:28:59.863635 kernel: hub 2-0:1.0: USB hub found Aug 13 07:28:59.863924 kernel: hub 2-0:1.0: 4 ports detected Aug 13 07:29:00.098735 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Aug 13 07:29:00.240645 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 07:29:00.245902 kernel: usbcore: registered new interface driver usbhid Aug 13 07:29:00.245944 kernel: usbhid: USB HID core driver Aug 13 07:29:00.252688 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Aug 13 07:29:00.255661 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Aug 13 07:29:00.674713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:29:00.676691 disk-uuid[561]: The operation has completed successfully. Aug 13 07:29:00.732046 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:29:00.732232 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:29:00.745899 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:29:00.750580 sh[588]: Success Aug 13 07:29:00.766656 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Aug 13 07:29:00.836051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:29:00.839118 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:29:00.840110 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 07:29:00.870242 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:29:00.870315 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:29:00.872287 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:29:00.875522 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:29:00.875648 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:29:00.885802 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:29:00.888195 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:29:00.901982 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:29:00.904994 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:29:00.919646 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:29:00.923230 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:29:00.923283 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:29:00.929656 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:29:00.944008 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:29:00.946317 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:29:00.964130 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:29:00.974667 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:29:01.045033 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:29:01.053881 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:29:01.091607 systemd-networkd[770]: lo: Link UP Aug 13 07:29:01.091674 systemd-networkd[770]: lo: Gained carrier Aug 13 07:29:01.095772 systemd-networkd[770]: Enumeration completed Aug 13 07:29:01.096322 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:29:01.096328 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:29:01.096744 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:29:01.097540 systemd[1]: Reached target network.target - Network. Aug 13 07:29:01.101475 systemd-networkd[770]: eth0: Link UP Aug 13 07:29:01.101484 systemd-networkd[770]: eth0: Gained carrier Aug 13 07:29:01.101497 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:29:01.169406 systemd-networkd[770]: eth0: DHCPv4 address 10.243.76.66/30, gateway 10.243.76.65 acquired from 10.243.76.65 Aug 13 07:29:01.199184 ignition[691]: Ignition 2.19.0 Aug 13 07:29:01.199205 ignition[691]: Stage: fetch-offline Aug 13 07:29:01.199294 ignition[691]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:29:01.199335 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:29:01.201846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 13 07:29:01.199477 ignition[691]: parsed url from cmdline: "" Aug 13 07:29:01.199484 ignition[691]: no config URL provided Aug 13 07:29:01.199494 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:29:01.199510 ignition[691]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:29:01.199518 ignition[691]: failed to fetch config: resource requires networking Aug 13 07:29:01.199913 ignition[691]: Ignition finished successfully Aug 13 07:29:01.209985 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 07:29:01.250499 ignition[780]: Ignition 2.19.0 Aug 13 07:29:01.250529 ignition[780]: Stage: fetch Aug 13 07:29:01.250838 ignition[780]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:29:01.250859 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:29:01.251025 ignition[780]: parsed url from cmdline: "" Aug 13 07:29:01.251031 ignition[780]: no config URL provided Aug 13 07:29:01.251041 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:29:01.251064 ignition[780]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:29:01.251310 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Aug 13 07:29:01.251334 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Aug 13 07:29:01.251382 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Aug 13 07:29:01.269499 ignition[780]: GET result: OK Aug 13 07:29:01.270219 ignition[780]: parsing config with SHA512: 223f7d51370c5aacfbe8a4ef76d81cf9db84a9af9ff9e47f1e5a89a931233de839d6ea081f0beb65b743564d635be6dcfdcf6067d3afeac722daebb91456e381 Aug 13 07:29:01.285295 unknown[780]: fetched base config from "system" Aug 13 07:29:01.285832 ignition[780]: fetch: fetch complete Aug 13 07:29:01.285314 unknown[780]: fetched base config from "system" Aug 13 07:29:01.285841 ignition[780]: fetch: fetch passed Aug 13 07:29:01.285365 unknown[780]: fetched user config from "openstack" Aug 13 07:29:01.285917 ignition[780]: Ignition finished successfully Aug 13 07:29:01.288414 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:29:01.295813 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:29:01.319112 ignition[786]: Ignition 2.19.0 Aug 13 07:29:01.319134 ignition[786]: Stage: kargs Aug 13 07:29:01.319378 ignition[786]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:29:01.319399 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:29:01.320673 ignition[786]: kargs: kargs passed Aug 13 07:29:01.320755 ignition[786]: Ignition finished successfully Aug 13 07:29:01.324097 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:29:01.336856 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:29:01.354164 ignition[792]: Ignition 2.19.0 Aug 13 07:29:01.354190 ignition[792]: Stage: disks Aug 13 07:29:01.354496 ignition[792]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:29:01.354516 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:29:01.359176 ignition[792]: disks: disks passed Aug 13 07:29:01.359908 ignition[792]: Ignition finished successfully Aug 13 07:29:01.361015 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:29:01.362406 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
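For context, the fetch stage above gives up waiting for a config drive and pulls the user config from the OpenStack metadata API at http://169.254.169.254/openstack/latest/user_data, then logs its SHA512 before parsing. A rough stand-alone sketch of that request using only the standard library (the retry count and timeout are illustrative assumptions, not Ignition's actual Go implementation):

```python
# Hedged sketch of the metadata fetch seen in the "fetch" stage above.
import hashlib
import time
import urllib.request

USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data(attempts: int = 5, delay: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(USER_DATA_URL, timeout=5) as resp:
                data = resp.read()
            # The log reports the config's SHA512 before parsing it.
            print("sha512:", hashlib.sha512(data).hexdigest())
            return data
        except OSError as err:                # URLError is an OSError subclass
            print(f"attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("could not reach the metadata service")

if __name__ == "__main__":
    fetch_user_data()
```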
Aug 13 07:29:01.363221 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:29:01.364795 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:29:01.366337 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:29:01.367785 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:29:01.375843 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:29:01.394641 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 13 07:29:01.398108 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:29:01.405784 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:29:01.524632 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:29:01.525506 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:29:01.526916 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:29:01.533747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:29:01.537731 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:29:01.538844 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:29:01.541808 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Aug 13 07:29:01.544062 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:29:01.544102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:29:01.553645 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808) Aug 13 07:29:01.554514 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:29:01.563261 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:29:01.563299 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:29:01.563320 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:29:01.568636 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:29:01.569891 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:29:01.575274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:29:01.746791 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:29:01.756689 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:29:01.762017 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:29:01.767657 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:29:01.872961 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:29:01.930780 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:29:01.934833 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:29:01.946253 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:29:01.948358 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:29:01.987689 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 07:29:01.997404 ignition[924]: INFO : Ignition 2.19.0 Aug 13 07:29:01.997404 ignition[924]: INFO : Stage: mount Aug 13 07:29:01.997404 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:29:01.997404 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:29:02.001656 ignition[924]: INFO : mount: mount passed Aug 13 07:29:02.002363 ignition[924]: INFO : Ignition finished successfully Aug 13 07:29:02.003005 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:29:02.766854 systemd-networkd[770]: eth0: Gained IPv6LL Aug 13 07:29:04.275090 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d310:24:19ff:fef3:4c42/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d310:24:19ff:fef3:4c42/64 assigned by NDisc. Aug 13 07:29:04.275114 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 07:29:08.780743 coreos-metadata[810]: Aug 13 07:29:08.780 WARN failed to locate config-drive, using the metadata service API instead Aug 13 07:29:08.802732 coreos-metadata[810]: Aug 13 07:29:08.802 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 13 07:29:08.817258 coreos-metadata[810]: Aug 13 07:29:08.817 INFO Fetch successful Aug 13 07:29:08.819243 coreos-metadata[810]: Aug 13 07:29:08.817 INFO wrote hostname srv-qvhwp.gb1.brightbox.com to /sysroot/etc/hostname Aug 13 07:29:08.820659 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Aug 13 07:29:08.820836 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Aug 13 07:29:08.829777 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:29:08.846913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:29:08.858654 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Aug 13 07:29:08.864664 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:29:08.864725 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:29:08.864746 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:29:08.870657 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:29:08.873316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
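One detail worth noticing in the stretch above is the pause between the mount stage finishing at 07:29:02 and coreos-metadata answering at 07:29:08, spent waiting for a config drive that never appeared. A small helper like the following, purely illustrative and hard-coded to this log's timestamp format, measures such gaps:

```python
# Illustrative helper (not part of the boot flow): extract timestamps from
# journal lines like the ones above and compute the gap between two of them.
import re
from datetime import datetime

STAMP = re.compile(r"Aug 13 (\d{2}:\d{2}:\d{2}\.\d{6})")

def seconds_between(line_a: str, line_b: str) -> float:
    t_a, t_b = (datetime.strptime(STAMP.search(s).group(1), "%H:%M:%S.%f")
                for s in (line_a, line_b))
    return (t_b - t_a).total_seconds()

print(seconds_between(
    "Aug 13 07:29:02.003005 systemd[1]: Finished ignition-mount.service - Ignition (mount).",
    "Aug 13 07:29:08.780743 coreos-metadata[810]: failed to locate config-drive",
))  # -> 6.777738
```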
Aug 13 07:29:08.911662 ignition[960]: INFO : Ignition 2.19.0 Aug 13 07:29:08.911662 ignition[960]: INFO : Stage: files Aug 13 07:29:08.913523 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:29:08.913523 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:29:08.913523 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:29:08.916259 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:29:08.916259 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:29:08.918174 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:29:08.918174 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:29:08.920008 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:29:08.920008 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:29:08.920008 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 07:29:08.918282 unknown[960]: wrote ssh authorized keys file for user: core Aug 13 07:29:09.061471 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:29:09.375575 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:29:09.375575 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:29:09.378167 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 07:29:09.926171 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 07:29:10.293118 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:29:10.293118 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:29:10.295463 
ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:29:10.295463 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:29:10.306665 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:29:10.306665 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:29:10.306665 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 07:29:10.575499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:29:11.696344 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:29:11.696344 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 07:29:11.704999 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:29:11.704999 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:29:11.704999 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 07:29:11.704999 ignition[960]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:29:11.704999 ignition[960]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:29:11.704999 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:29:11.704999 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:29:11.704999 ignition[960]: INFO : files: files passed Aug 13 07:29:11.704999 ignition[960]: INFO : Ignition finished successfully Aug 13 07:29:11.706762 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:29:11.718946 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:29:11.723557 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:29:11.743839 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:29:11.744088 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
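The files stage above is driven by the user config fetched from the metadata service earlier, which the log does not print. Purely as a hedged illustration of its approximate shape, the snippet below emits an Ignition-style JSON config describing one file, one link, and one enabled unit similar to those logged; the spec version, file mode, and unit body are assumptions rather than the real config:

```python
# Hypothetical, simplified Ignition-style config covering a few of the
# operations in the files stage above. Not the actual user config.
import json

config = {
    "ignition": {"version": "3.3.0"},          # assumed spec version
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "mode": 420,                        # 0644, assumed
            "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
        }],
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
        }],
    },
    "systemd": {
        "units": [{
            "name": "prepare-helm.service",
            "enabled": True,
            "contents": "[Unit]\nDescription=Placeholder unit body\n",
        }],
    },
}

print(json.dumps(config, indent=2))
```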
Aug 13 07:29:11.764956 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:29:11.764956 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:29:11.767869 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:29:11.769559 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:29:11.771058 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:29:11.776912 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:29:11.824864 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:29:11.825046 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:29:11.826842 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:29:11.828159 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:29:11.829824 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:29:11.835853 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:29:11.856860 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:29:11.865860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:29:11.878274 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:29:11.879382 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:29:11.881078 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:29:11.882499 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:29:11.882707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:29:11.884560 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:29:11.885497 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:29:11.886968 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:29:11.888295 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:29:11.889691 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:29:11.891261 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:29:11.892941 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:29:11.894482 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:29:11.895915 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:29:11.897473 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:29:11.898848 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:29:11.899080 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:29:11.900750 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:29:11.901854 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:29:11.903341 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:29:11.904700 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Aug 13 07:29:11.905875 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:29:11.906053 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:29:11.907909 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:29:11.908086 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:29:11.909925 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:29:11.910079 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:29:11.920991 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:29:11.923673 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:29:11.923879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:29:11.929789 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:29:11.931206 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:29:11.931421 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:29:11.934159 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:29:11.934353 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:29:11.945863 ignition[1012]: INFO : Ignition 2.19.0 Aug 13 07:29:11.945863 ignition[1012]: INFO : Stage: umount Aug 13 07:29:11.948645 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:29:11.953304 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:29:11.953304 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:29:11.953304 ignition[1012]: INFO : umount: umount passed Aug 13 07:29:11.953304 ignition[1012]: INFO : Ignition finished successfully Aug 13 07:29:11.948831 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:29:11.952571 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:29:11.953076 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:29:11.954096 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:29:11.954175 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:29:11.956744 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 07:29:11.956813 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 07:29:11.959315 systemd[1]: Stopped target network.target - Network. Aug 13 07:29:11.960562 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:29:11.961689 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:29:11.963026 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:29:11.964511 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:29:11.967862 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:29:11.969006 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:29:11.969646 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:29:11.973821 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:29:11.973915 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:29:11.975214 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:29:11.975291 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Aug 13 07:29:11.977360 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:29:11.977445 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:29:11.980118 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:29:11.980220 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:29:11.981236 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:29:11.983991 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:29:11.986998 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:29:11.987786 systemd-networkd[770]: eth0: DHCPv6 lease lost Aug 13 07:29:11.987993 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:29:11.988157 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:29:11.990295 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:29:11.991212 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:29:11.992533 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:29:11.992894 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:29:11.996631 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:29:11.996808 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:29:12.004135 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:29:12.004232 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:29:12.005127 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:29:12.005206 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:29:12.013781 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:29:12.014895 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:29:12.014977 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:29:12.015802 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:29:12.015869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:29:12.016566 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:29:12.016655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:29:12.017963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:29:12.018027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:29:12.019878 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:29:12.032335 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:29:12.032542 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:29:12.035304 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:29:12.035572 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:29:12.038914 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:29:12.038993 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:29:12.039808 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:29:12.039866 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 07:29:12.041299 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:29:12.041384 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:29:12.043522 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:29:12.043589 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:29:12.045103 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:29:12.045197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:29:12.052867 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:29:12.053602 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:29:12.053700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:29:12.055397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:29:12.055469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:29:12.066048 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:29:12.066237 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:29:12.068092 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:29:12.078272 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:29:12.087452 systemd[1]: Switching root. Aug 13 07:29:12.120734 systemd-journald[202]: Journal stopped Aug 13 07:29:13.633930 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Aug 13 07:29:13.635230 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:29:13.635300 kernel: SELinux: policy capability open_perms=1 Aug 13 07:29:13.635323 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:29:13.635361 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:29:13.635403 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:29:13.635438 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:29:13.635471 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:29:13.635496 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:29:13.635545 kernel: audit: type=1403 audit(1755070152.387:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:29:13.635586 systemd[1]: Successfully loaded SELinux policy in 73.692ms. Aug 13 07:29:13.636716 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.301ms. Aug 13 07:29:13.636765 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:29:13.636829 systemd[1]: Detected virtualization kvm. Aug 13 07:29:13.636862 systemd[1]: Detected architecture x86-64. Aug 13 07:29:13.636889 systemd[1]: Detected first boot. Aug 13 07:29:13.636936 systemd[1]: Hostname set to . Aug 13 07:29:13.636958 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:29:13.636978 zram_generator::config[1065]: No configuration found. Aug 13 07:29:13.637030 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:29:13.637052 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Aug 13 07:29:13.637086 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:29:13.637131 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:29:13.637155 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:29:13.637175 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:29:13.637203 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:29:13.637225 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:29:13.637268 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:29:13.637319 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:29:13.637341 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:29:13.637378 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:29:13.637408 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:29:13.637429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:29:13.637450 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:29:13.637470 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:29:13.637490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:29:13.637511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:29:13.637530 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:29:13.637565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:29:13.637595 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:29:13.638682 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:29:13.638733 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:29:13.638782 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:29:13.638821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:29:13.638843 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:29:13.638864 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:29:13.638883 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:29:13.638902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:29:13.638921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:29:13.638941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:29:13.638970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:29:13.639009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:29:13.639030 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:29:13.639059 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:29:13.639079 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Aug 13 07:29:13.639100 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:29:13.639119 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:13.639139 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:29:13.639159 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:29:13.639192 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:29:13.639214 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:29:13.639235 systemd[1]: Reached target machines.target - Containers. Aug 13 07:29:13.639255 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:29:13.639274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:29:13.639306 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:29:13.639327 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:29:13.639347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:29:13.639366 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:29:13.639402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:29:13.639432 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:29:13.639454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:29:13.639474 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:29:13.639494 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:29:13.639522 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:29:13.639544 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:29:13.639571 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:29:13.639607 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:29:13.639952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:29:13.639998 kernel: ACPI: bus type drm_connector registered Aug 13 07:29:13.640043 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:29:13.640065 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:29:13.640084 kernel: loop: module loaded Aug 13 07:29:13.640103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:29:13.640123 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:29:13.640143 systemd[1]: Stopped verity-setup.service. Aug 13 07:29:13.640179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:13.640201 kernel: fuse: init (API version 7.39) Aug 13 07:29:13.640221 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:29:13.640240 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Aug 13 07:29:13.640260 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:29:13.640289 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:29:13.640327 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:29:13.640363 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:29:13.640385 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:29:13.640405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:29:13.640425 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:29:13.640484 systemd-journald[1151]: Collecting audit messages is disabled. Aug 13 07:29:13.640533 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:29:13.640557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:29:13.640595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:29:13.641681 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:29:13.641734 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:29:13.641772 systemd-journald[1151]: Journal started Aug 13 07:29:13.641834 systemd-journald[1151]: Runtime Journal (/run/log/journal/b5ac3163283c466b91efa3ea3ade2e8d) is 4.7M, max 38.0M, 33.2M free. Aug 13 07:29:13.197390 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:29:13.224417 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:29:13.644308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:29:13.644347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:29:13.225190 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:29:13.648738 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:29:13.649713 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:29:13.649956 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:29:13.651052 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:29:13.651262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:29:13.652329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:29:13.653374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:29:13.654613 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:29:13.669909 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:29:13.684717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:29:13.694700 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:29:13.695527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:29:13.695577 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:29:13.697605 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:29:13.704797 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:29:13.708596 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 13 07:29:13.709548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:29:13.721811 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:29:13.730961 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:29:13.732071 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:29:13.736824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:29:13.737645 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:29:13.740876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:29:13.749921 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:29:13.753758 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:29:13.760126 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:29:13.761100 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:29:13.764247 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:29:13.768711 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:29:13.772963 kernel: loop0: detected capacity change from 0 to 142488 Aug 13 07:29:13.776558 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:29:13.836928 systemd-journald[1151]: Time spent on flushing to /var/log/journal/b5ac3163283c466b91efa3ea3ade2e8d is 171.118ms for 1152 entries. Aug 13 07:29:13.836928 systemd-journald[1151]: System Journal (/var/log/journal/b5ac3163283c466b91efa3ea3ade2e8d) is 8.0M, max 584.8M, 576.8M free. Aug 13 07:29:14.067979 systemd-journald[1151]: Received client request to flush runtime journal. Aug 13 07:29:14.068066 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:29:14.068097 kernel: loop1: detected capacity change from 0 to 140768 Aug 13 07:29:14.068129 kernel: loop2: detected capacity change from 0 to 224512 Aug 13 07:29:13.839972 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:29:13.962695 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:29:13.965680 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:29:13.974122 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:29:13.983765 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:29:14.033213 udevadm[1199]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:29:14.063678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:29:14.071395 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:29:14.079666 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:29:14.092386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
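Side note: once the runtime journal above is flushed to /var/log/journal, the same entries can be pulled back per unit with journalctl's JSON output. A brief sketch, where the unit name is just one example from this boot:

```python
# Hedged sketch: read back messages for one unit from the persistent journal.
import json
import subprocess

def unit_messages(unit: str = "ignition-files.service"):
    out = subprocess.run(
        ["journalctl", "-b", "-u", unit, "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    # journalctl -o json emits one JSON object per line
    return [json.loads(line).get("MESSAGE", "") for line in out.splitlines() if line]

if __name__ == "__main__":
    for msg in unit_messages():
        print(msg)
```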
Aug 13 07:29:14.117647 kernel: loop3: detected capacity change from 0 to 8 Aug 13 07:29:14.176903 kernel: loop4: detected capacity change from 0 to 142488 Aug 13 07:29:14.173734 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Aug 13 07:29:14.173754 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Aug 13 07:29:14.212646 kernel: loop5: detected capacity change from 0 to 140768 Aug 13 07:29:14.213369 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:29:14.241409 kernel: loop6: detected capacity change from 0 to 224512 Aug 13 07:29:14.265720 kernel: loop7: detected capacity change from 0 to 8 Aug 13 07:29:14.271036 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Aug 13 07:29:14.271912 (sd-merge)[1212]: Merged extensions into '/usr'. Aug 13 07:29:14.282875 systemd[1]: Reloading requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:29:14.282916 systemd[1]: Reloading... Aug 13 07:29:14.601401 zram_generator::config[1238]: No configuration found. Aug 13 07:29:14.730531 ldconfig[1183]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:29:14.944498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:29:15.010393 systemd[1]: Reloading finished in 723 ms. Aug 13 07:29:15.048698 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:29:15.051310 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:29:15.064892 systemd[1]: Starting ensure-sysext.service... Aug 13 07:29:15.075788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:29:15.099752 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:29:15.099777 systemd[1]: Reloading... Aug 13 07:29:15.119079 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:29:15.119842 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:29:15.122044 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:29:15.122977 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Aug 13 07:29:15.123240 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Aug 13 07:29:15.134948 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:29:15.134966 systemd-tmpfiles[1296]: Skipping /boot Aug 13 07:29:15.168847 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:29:15.168866 systemd-tmpfiles[1296]: Skipping /boot Aug 13 07:29:15.238661 zram_generator::config[1323]: No configuration found. Aug 13 07:29:15.407926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:29:15.473285 systemd[1]: Reloading finished in 372 ms. Aug 13 07:29:15.495141 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Aug 13 07:29:15.500263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:29:15.514950 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:29:15.522844 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:29:15.525567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:29:15.530961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:29:15.535877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:29:15.539836 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:29:15.549796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:15.550084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:29:15.555973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:29:15.561919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:29:15.566999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:29:15.567968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:29:15.568177 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:15.576789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:15.577136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:29:15.577464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:29:15.588420 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:29:15.589740 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:15.596566 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:29:15.600871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:15.601156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:29:15.607899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:29:15.608849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:29:15.608949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:29:15.610734 systemd[1]: Finished ensure-sysext.service. Aug 13 07:29:15.612907 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:29:15.623864 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Aug 13 07:29:15.630897 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:29:15.640466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:29:15.640744 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:29:15.642058 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:29:15.662225 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:29:15.662784 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:29:15.669094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:29:15.669363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:29:15.679106 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:29:15.681172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:29:15.682378 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:29:15.693207 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:29:15.706070 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Aug 13 07:29:15.713264 augenrules[1417]: No rules Aug 13 07:29:15.717307 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:29:15.718638 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:29:15.758764 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:29:15.760942 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:29:15.766259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:29:15.779857 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:29:15.831866 systemd-resolved[1386]: Positive Trust Anchors: Aug 13 07:29:15.832386 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:29:15.832505 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:29:15.841372 systemd-resolved[1386]: Using system hostname 'srv-qvhwp.gb1.brightbox.com'. Aug 13 07:29:15.844578 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:29:15.846786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:29:15.868306 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:29:15.869351 systemd[1]: Reached target time-set.target - System Time Set. 
Aug 13 07:29:15.933793 systemd-networkd[1434]: lo: Link UP Aug 13 07:29:15.933806 systemd-networkd[1434]: lo: Gained carrier Aug 13 07:29:15.935015 systemd-networkd[1434]: Enumeration completed Aug 13 07:29:15.935161 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:29:15.936835 systemd[1]: Reached target network.target - Network. Aug 13 07:29:15.949868 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:29:15.951826 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:29:16.002697 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1438) Aug 13 07:29:16.078586 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:29:16.078599 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:29:16.080828 systemd-networkd[1434]: eth0: Link UP Aug 13 07:29:16.080851 systemd-networkd[1434]: eth0: Gained carrier Aug 13 07:29:16.080868 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:29:16.098651 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:29:16.109708 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 07:29:16.126664 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:29:16.128470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:29:16.128997 systemd-networkd[1434]: eth0: DHCPv4 address 10.243.76.66/30, gateway 10.243.76.65 acquired from 10.243.76.65 Aug 13 07:29:16.131160 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Aug 13 07:29:16.137201 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:29:16.168357 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:29:16.186718 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 07:29:16.192780 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 07:29:16.193166 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 07:29:16.198685 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 07:29:16.246979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:29:16.551091 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:29:16.581152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:29:16.592961 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:29:16.610179 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:29:16.648120 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:29:16.649341 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:29:16.650096 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:29:16.651089 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
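[Editor's note] eth0 is picked up by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which is why the log warns about the "potentially unpredictable interface name". The actual file is not reproduced here; a DHCP-any-interface profile of this kind typically looks roughly like:

    [Match]
    Name=*

    [Network]
    DHCP=yes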
Aug 13 07:29:16.652103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:29:16.653413 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:29:16.654323 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:29:16.655110 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:29:16.655974 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:29:16.656039 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:29:16.656695 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:29:16.660047 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:29:16.662827 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:29:16.669241 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:29:16.672096 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:29:16.673577 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:29:16.674469 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:29:16.675181 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:29:16.675894 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:29:16.675941 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:29:16.682856 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:29:16.691864 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:29:16.695924 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 07:29:16.704766 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:29:16.712837 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:29:16.718867 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:29:16.721727 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:29:16.726904 jq[1477]: false Aug 13 07:29:16.725810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:29:16.734798 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:29:16.748887 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:29:16.754369 dbus-daemon[1475]: [system] SELinux support is enabled Aug 13 07:29:16.759875 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:29:16.768298 dbus-daemon[1475]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1434 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 07:29:16.772853 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:29:16.774540 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Aug 13 07:29:16.775306 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:29:16.777383 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:29:16.781949 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:29:16.785532 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:29:16.791700 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:29:16.801220 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:29:16.802780 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:29:16.812314 jq[1487]: true Aug 13 07:29:16.817982 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 07:29:16.828806 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:29:16.828861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:29:16.843195 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 07:29:16.844009 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:29:16.844051 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:29:16.845764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:29:16.847034 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:29:16.856093 jq[1492]: true Aug 13 07:29:16.870219 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:29:16.902645 tar[1493]: linux-amd64/LICENSE Aug 13 07:29:16.902645 tar[1493]: linux-amd64/helm Aug 13 07:29:16.934982 extend-filesystems[1479]: Found loop4 Aug 13 07:29:16.936090 update_engine[1486]: I20250813 07:29:16.932750 1486 main.cc:92] Flatcar Update Engine starting Aug 13 07:29:16.932111 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:29:16.933730 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:29:16.941708 extend-filesystems[1479]: Found loop5 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found loop6 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found loop7 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda1 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda2 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda3 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found usr Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda4 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda6 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda7 Aug 13 07:29:16.941708 extend-filesystems[1479]: Found vda9 Aug 13 07:29:16.941708 extend-filesystems[1479]: Checking size of /dev/vda9 Aug 13 07:29:16.944055 systemd[1]: Started update-engine.service - Update Engine. 
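[Editor's note] The org.freedesktop.hostname1 activation requested by systemd-networkd above is the same interface hostnamectl talks to; a usage sketch for inspecting or changing it interactively (not taken from this log):

    hostnamectl status
    hostnamectl set-hostname <new-name>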
Aug 13 07:29:17.001509 update_engine[1486]: I20250813 07:29:16.944725 1486 update_check_scheduler.cc:74] Next update check in 2m33s Aug 13 07:29:16.956372 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:29:16.996596 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 07:29:16.997559 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:29:16.999010 systemd-logind[1485]: New seat seat0. Aug 13 07:29:17.001764 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:29:17.035649 extend-filesystems[1479]: Resized partition /dev/vda9 Aug 13 07:29:17.039726 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:29:17.053900 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Aug 13 07:29:17.053949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1441) Aug 13 07:29:17.231331 systemd-networkd[1434]: eth0: Gained IPv6LL Aug 13 07:29:17.237337 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Aug 13 07:29:17.271931 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:29:17.276611 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:29:17.290369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:29:17.294865 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:29:17.349388 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:29:17.350034 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:29:17.360843 systemd[1]: Starting sshkeys.service... Aug 13 07:29:17.455595 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 07:29:17.459399 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 07:29:17.471103 dbus-daemon[1475]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1499 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 07:29:17.603257 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:29:17.548961 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:29:17.659969 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:29:17.659969 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:29:17.659969 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 07:29:17.561119 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 07:29:17.663447 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Aug 13 07:29:17.585412 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:29:17.595965 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 07:29:17.644685 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:29:17.645949 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
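[Editor's note] extend-filesystems enumerates the block devices, measures /dev/vda9, and (as the resize2fs output that follows shows) grows the mounted root filesystem on-line. Done by hand the sequence is roughly the following sketch; the service's exact invocation is not shown in this log:

    # inspect the current ext4 geometry, then grow it in place while mounted
    dumpe2fs -h /dev/vda9
    resize2fs /dev/vda9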
Aug 13 07:29:17.683354 polkitd[1551]: Started polkitd version 121 Aug 13 07:29:17.720522 polkitd[1551]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 07:29:17.720669 polkitd[1551]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 07:29:17.732844 polkitd[1551]: Finished loading, compiling and executing 2 rules Aug 13 07:29:17.735403 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 07:29:17.735669 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 07:29:17.736923 polkitd[1551]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 07:29:17.767475 containerd[1500]: time="2025-08-13T07:29:17.767339702Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:29:17.789260 systemd-hostnamed[1499]: Hostname set to (static) Aug 13 07:29:17.800756 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Aug 13 07:29:17.801053 systemd-networkd[1434]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d310:24:19ff:fef3:4c42/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d310:24:19ff:fef3:4c42/64 assigned by NDisc. Aug 13 07:29:17.801178 systemd-networkd[1434]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 07:29:17.877888 containerd[1500]: time="2025-08-13T07:29:17.877828952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.886782378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.886832437Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.886857096Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887131036Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887162954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887275637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887299658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887520566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887543515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887564279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:29:17.888646 containerd[1500]: time="2025-08-13T07:29:17.887579842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:29:17.889097 containerd[1500]: time="2025-08-13T07:29:17.887762112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:29:17.889097 containerd[1500]: time="2025-08-13T07:29:17.888158786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:29:17.889097 containerd[1500]: time="2025-08-13T07:29:17.888339798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:29:17.889097 containerd[1500]: time="2025-08-13T07:29:17.888362964Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:29:17.889097 containerd[1500]: time="2025-08-13T07:29:17.888505739Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:29:17.889097 containerd[1500]: time="2025-08-13T07:29:17.888584865Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:29:17.892220 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:29:17.905027 containerd[1500]: time="2025-08-13T07:29:17.904278573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:29:17.905027 containerd[1500]: time="2025-08-13T07:29:17.904393222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:29:17.905027 containerd[1500]: time="2025-08-13T07:29:17.904421254Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:29:17.905027 containerd[1500]: time="2025-08-13T07:29:17.904495843Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:29:17.905027 containerd[1500]: time="2025-08-13T07:29:17.904523315Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:29:17.905027 containerd[1500]: time="2025-08-13T07:29:17.904779193Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:29:17.905556 containerd[1500]: time="2025-08-13T07:29:17.905528362Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.906916562Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.906948665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.906969307Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.906990790Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907012531Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907031293Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907051711Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907071958Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907109959Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907143106Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907173638Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907236120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907261091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.907641 containerd[1500]: time="2025-08-13T07:29:17.907279954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907300510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907319263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907340398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907359242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907378479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907408431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907443112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907465154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907486519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907505727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907526729Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907565588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.908159 containerd[1500]: time="2025-08-13T07:29:17.907588404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.907610432Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.908973126Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.909007350Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.909026051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.909044693Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.909060361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.909084593Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.909105694Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:29:17.911450 containerd[1500]: time="2025-08-13T07:29:17.909122650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:29:17.911846 containerd[1500]: time="2025-08-13T07:29:17.909553477Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:29:17.911846 containerd[1500]: time="2025-08-13T07:29:17.909652206Z" level=info msg="Connect containerd service" Aug 13 07:29:17.911846 containerd[1500]: time="2025-08-13T07:29:17.909730155Z" level=info msg="using legacy CRI server" Aug 13 07:29:17.911846 containerd[1500]: time="2025-08-13T07:29:17.909746764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:29:17.911846 containerd[1500]: time="2025-08-13T07:29:17.909898067Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:29:17.914605 containerd[1500]: time="2025-08-13T07:29:17.914571190Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:29:17.972092 
sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:29:17.978510 containerd[1500]: time="2025-08-13T07:29:17.978433120Z" level=info msg="Start subscribing containerd event" Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.978698273Z" level=info msg="Start recovering state" Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.978871901Z" level=info msg="Start event monitor" Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.978903582Z" level=info msg="Start snapshots syncer" Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.978994801Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.979022293Z" level=info msg="Start streaming server" Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.979613706Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.979729031Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:29:17.981121 containerd[1500]: time="2025-08-13T07:29:17.979841586Z" level=info msg="containerd successfully booted in 0.214180s" Aug 13 07:29:17.980014 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:29:18.116517 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:29:18.129881 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:29:18.158462 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:29:18.161332 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:29:18.187699 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:29:18.239434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:29:18.250272 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:29:18.260220 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:29:18.261438 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:29:18.745002 tar[1493]: linux-amd64/README.md Aug 13 07:29:18.767822 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:29:18.960229 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Aug 13 07:29:19.397603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:29:19.407879 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:29:20.211037 kubelet[1600]: E0813 07:29:20.210941 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:29:20.212871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:29:20.213112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:29:20.213607 systemd[1]: kubelet.service: Consumed 1.752s CPU time. Aug 13 07:29:21.120829 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
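[Editor's note] The CRI configuration dump above (runc with Options:map[SystemdCgroup:true], SandboxImage:registry.k8s.io/pause:3.8, NetworkPluginConfDir:/etc/cni/net.d) corresponds to the usual containerd config.toml stanza. A sketch of the relevant fragment, assuming containerd 1.7's version 2 config layout rather than the file actually on this host:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

The "failed to load cni during init" error above is expected at this stage: it clears once a network conflist is installed under /etc/cni/net.d.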
Aug 13 07:29:21.135151 systemd[1]: Started sshd@0-10.243.76.66:22-139.178.68.195:45432.service - OpenSSH per-connection server daemon (139.178.68.195:45432). Aug 13 07:29:22.035717 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 45432 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:22.038612 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:22.054357 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:29:22.070417 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:29:22.076502 systemd-logind[1485]: New session 1 of user core. Aug 13 07:29:22.091878 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:29:22.100116 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:29:22.116256 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:29:22.253094 systemd[1614]: Queued start job for default target default.target. Aug 13 07:29:22.265093 systemd[1614]: Created slice app.slice - User Application Slice. Aug 13 07:29:22.265137 systemd[1614]: Reached target paths.target - Paths. Aug 13 07:29:22.265159 systemd[1614]: Reached target timers.target - Timers. Aug 13 07:29:22.267353 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:29:22.282640 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:29:22.282835 systemd[1614]: Reached target sockets.target - Sockets. Aug 13 07:29:22.282859 systemd[1614]: Reached target basic.target - Basic System. Aug 13 07:29:22.282927 systemd[1614]: Reached target default.target - Main User Target. Aug 13 07:29:22.283022 systemd[1614]: Startup finished in 158ms. Aug 13 07:29:22.283143 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:29:22.296915 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:29:22.944140 systemd[1]: Started sshd@1-10.243.76.66:22-139.178.68.195:45446.service - OpenSSH per-connection server daemon (139.178.68.195:45446). Aug 13 07:29:23.332150 login[1590]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:29:23.337361 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:29:23.341434 systemd-logind[1485]: New session 2 of user core. Aug 13 07:29:23.349939 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:29:23.354659 systemd-logind[1485]: New session 3 of user core. Aug 13 07:29:23.362889 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:29:23.833850 sshd[1625]: Accepted publickey for core from 139.178.68.195 port 45446 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:23.836002 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:23.842019 systemd-logind[1485]: New session 4 of user core. Aug 13 07:29:23.851933 systemd[1]: Started session-4.scope - Session 4 of User core. 
Aug 13 07:29:23.915168 coreos-metadata[1474]: Aug 13 07:29:23.915 WARN failed to locate config-drive, using the metadata service API instead Aug 13 07:29:23.941424 coreos-metadata[1474]: Aug 13 07:29:23.941 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Aug 13 07:29:23.947928 coreos-metadata[1474]: Aug 13 07:29:23.947 INFO Fetch failed with 404: resource not found Aug 13 07:29:23.948076 coreos-metadata[1474]: Aug 13 07:29:23.947 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 13 07:29:23.948698 coreos-metadata[1474]: Aug 13 07:29:23.948 INFO Fetch successful Aug 13 07:29:23.948798 coreos-metadata[1474]: Aug 13 07:29:23.948 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Aug 13 07:29:23.960668 coreos-metadata[1474]: Aug 13 07:29:23.960 INFO Fetch successful Aug 13 07:29:23.960668 coreos-metadata[1474]: Aug 13 07:29:23.960 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Aug 13 07:29:23.974654 coreos-metadata[1474]: Aug 13 07:29:23.974 INFO Fetch successful Aug 13 07:29:23.974654 coreos-metadata[1474]: Aug 13 07:29:23.974 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Aug 13 07:29:23.991406 coreos-metadata[1474]: Aug 13 07:29:23.991 INFO Fetch successful Aug 13 07:29:23.991406 coreos-metadata[1474]: Aug 13 07:29:23.991 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Aug 13 07:29:24.011411 coreos-metadata[1474]: Aug 13 07:29:24.011 INFO Fetch successful Aug 13 07:29:24.057892 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:29:24.058966 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:29:24.459077 sshd[1625]: pam_unix(sshd:session): session closed for user core Aug 13 07:29:24.465638 systemd[1]: sshd@1-10.243.76.66:22-139.178.68.195:45446.service: Deactivated successfully. Aug 13 07:29:24.468609 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:29:24.469869 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:29:24.471548 systemd-logind[1485]: Removed session 4. Aug 13 07:29:24.623171 systemd[1]: Started sshd@2-10.243.76.66:22-139.178.68.195:45456.service - OpenSSH per-connection server daemon (139.178.68.195:45456). Aug 13 07:29:24.739231 coreos-metadata[1552]: Aug 13 07:29:24.739 WARN failed to locate config-drive, using the metadata service API instead Aug 13 07:29:24.762763 coreos-metadata[1552]: Aug 13 07:29:24.762 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Aug 13 07:29:24.791042 coreos-metadata[1552]: Aug 13 07:29:24.790 INFO Fetch successful Aug 13 07:29:24.791207 coreos-metadata[1552]: Aug 13 07:29:24.791 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 07:29:24.818858 coreos-metadata[1552]: Aug 13 07:29:24.818 INFO Fetch successful Aug 13 07:29:24.821120 unknown[1552]: wrote ssh authorized keys file for user: core Aug 13 07:29:24.840716 update-ssh-keys[1670]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:29:24.841310 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:29:24.844542 systemd[1]: Finished sshkeys.service. Aug 13 07:29:24.846097 systemd[1]: Reached target multi-user.target - Multi-User System. 
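[Editor's note] coreos-metadata falls back from the missing config drive to the OpenStack/EC2-compatible metadata service. The same endpoints it walks above can be queried by hand from inside the instance; a usage sketch built from the URLs shown in the log:

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/instance-id
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4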
Aug 13 07:29:24.848723 systemd[1]: Startup finished in 1.573s (kernel) + 14.611s (initrd) + 12.531s (userspace) = 28.716s. Aug 13 07:29:25.516475 sshd[1665]: Accepted publickey for core from 139.178.68.195 port 45456 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:25.518672 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:25.525860 systemd-logind[1485]: New session 5 of user core. Aug 13 07:29:25.532898 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:29:26.139725 sshd[1665]: pam_unix(sshd:session): session closed for user core Aug 13 07:29:26.143468 systemd[1]: sshd@2-10.243.76.66:22-139.178.68.195:45456.service: Deactivated successfully. Aug 13 07:29:26.145493 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:29:26.147793 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:29:26.149224 systemd-logind[1485]: Removed session 5. Aug 13 07:29:30.463829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:29:30.482993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:29:30.774862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:29:30.786083 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:29:30.864234 kubelet[1685]: E0813 07:29:30.864138 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:29:30.867967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:29:30.868208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:29:36.305332 systemd[1]: Started sshd@3-10.243.76.66:22-139.178.68.195:48596.service - OpenSSH per-connection server daemon (139.178.68.195:48596). Aug 13 07:29:37.213345 sshd[1693]: Accepted publickey for core from 139.178.68.195 port 48596 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:37.215680 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:37.222579 systemd-logind[1485]: New session 6 of user core. Aug 13 07:29:37.237854 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:29:37.837762 sshd[1693]: pam_unix(sshd:session): session closed for user core Aug 13 07:29:37.842071 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:29:37.843314 systemd[1]: sshd@3-10.243.76.66:22-139.178.68.195:48596.service: Deactivated successfully. Aug 13 07:29:37.845872 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:29:37.847227 systemd-logind[1485]: Removed session 6. Aug 13 07:29:37.997976 systemd[1]: Started sshd@4-10.243.76.66:22-139.178.68.195:48612.service - OpenSSH per-connection server daemon (139.178.68.195:48612). Aug 13 07:29:38.891496 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 48612 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:38.894215 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:38.901918 systemd-logind[1485]: New session 7 of user core. 
Aug 13 07:29:38.910823 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:29:39.517517 sshd[1700]: pam_unix(sshd:session): session closed for user core Aug 13 07:29:39.524569 systemd[1]: sshd@4-10.243.76.66:22-139.178.68.195:48612.service: Deactivated successfully. Aug 13 07:29:39.527915 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:29:39.529390 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:29:39.531367 systemd-logind[1485]: Removed session 7. Aug 13 07:29:39.682021 systemd[1]: Started sshd@5-10.243.76.66:22-139.178.68.195:48622.service - OpenSSH per-connection server daemon (139.178.68.195:48622). Aug 13 07:29:40.584783 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 48622 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:40.586993 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:40.594039 systemd-logind[1485]: New session 8 of user core. Aug 13 07:29:40.602883 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:29:40.972037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:29:40.977858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:29:41.159402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:29:41.172125 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:29:41.210960 sshd[1707]: pam_unix(sshd:session): session closed for user core Aug 13 07:29:41.217509 systemd[1]: sshd@5-10.243.76.66:22-139.178.68.195:48622.service: Deactivated successfully. Aug 13 07:29:41.221747 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:29:41.222945 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:29:41.226428 systemd-logind[1485]: Removed session 8. Aug 13 07:29:41.259021 kubelet[1719]: E0813 07:29:41.258881 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:29:41.261175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:29:41.261431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:29:41.367974 systemd[1]: Started sshd@6-10.243.76.66:22-139.178.68.195:58724.service - OpenSSH per-connection server daemon (139.178.68.195:58724). Aug 13 07:29:42.263100 sshd[1729]: Accepted publickey for core from 139.178.68.195 port 58724 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:42.265231 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:42.272270 systemd-logind[1485]: New session 9 of user core. Aug 13 07:29:42.279851 systemd[1]: Started session-9.scope - Session 9 of User core. 
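[Editor's note] The repeating kubelet failure is the expected state before kubeadm init/join has run, since /var/lib/kubelet/config.yaml is normally written by kubeadm. For illustration only, a minimal KubeletConfiguration of the kind that ends up at that path; the field values here are assumptions, not taken from this log:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock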
Aug 13 07:29:42.753531 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:29:42.754777 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:29:42.769251 sudo[1732]: pam_unix(sudo:session): session closed for user root Aug 13 07:29:42.913692 sshd[1729]: pam_unix(sshd:session): session closed for user core Aug 13 07:29:42.917973 systemd[1]: sshd@6-10.243.76.66:22-139.178.68.195:58724.service: Deactivated successfully. Aug 13 07:29:42.920136 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:29:42.921881 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:29:42.923325 systemd-logind[1485]: Removed session 9. Aug 13 07:29:43.079758 systemd[1]: Started sshd@7-10.243.76.66:22-139.178.68.195:58730.service - OpenSSH per-connection server daemon (139.178.68.195:58730). Aug 13 07:29:43.972532 sshd[1737]: Accepted publickey for core from 139.178.68.195 port 58730 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:43.974705 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:43.981232 systemd-logind[1485]: New session 10 of user core. Aug 13 07:29:43.991822 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:29:44.457245 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:29:44.458518 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:29:44.465270 sudo[1741]: pam_unix(sudo:session): session closed for user root Aug 13 07:29:44.473443 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:29:44.473933 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:29:44.498075 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:29:44.501274 auditctl[1744]: No rules Aug 13 07:29:44.503272 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:29:44.503655 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:29:44.506843 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:29:44.559687 augenrules[1762]: No rules Aug 13 07:29:44.561660 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:29:44.563425 sudo[1740]: pam_unix(sudo:session): session closed for user root Aug 13 07:29:44.709043 sshd[1737]: pam_unix(sshd:session): session closed for user core Aug 13 07:29:44.713694 systemd[1]: sshd@7-10.243.76.66:22-139.178.68.195:58730.service: Deactivated successfully. Aug 13 07:29:44.716199 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:29:44.717356 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:29:44.719834 systemd-logind[1485]: Removed session 10. Aug 13 07:29:44.867971 systemd[1]: Started sshd@8-10.243.76.66:22-139.178.68.195:58734.service - OpenSSH per-connection server daemon (139.178.68.195:58734). Aug 13 07:29:45.763523 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 58734 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:29:45.766475 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:29:45.772856 systemd-logind[1485]: New session 11 of user core. 
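[Editor's note] The privileged commands in sessions 9 and 10 amount to the following provisioning steps, shown here as the equivalent shell sequence assembled from the sudo entries above:

    setenforce 1
    rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules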
Aug 13 07:29:45.779876 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:29:46.244784 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:29:46.245279 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:29:46.931931 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:29:46.938362 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:29:47.661054 dockerd[1789]: time="2025-08-13T07:29:47.660957963Z" level=info msg="Starting up" Aug 13 07:29:47.827125 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2464815519-merged.mount: Deactivated successfully. Aug 13 07:29:47.859772 dockerd[1789]: time="2025-08-13T07:29:47.859712211Z" level=info msg="Loading containers: start." Aug 13 07:29:47.873297 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 07:29:48.030771 kernel: Initializing XFRM netlink socket Aug 13 07:29:48.082215 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Aug 13 07:29:48.149835 systemd-networkd[1434]: docker0: Link UP Aug 13 07:29:48.165498 dockerd[1789]: time="2025-08-13T07:29:48.165337676Z" level=info msg="Loading containers: done." Aug 13 07:29:48.218156 dockerd[1789]: time="2025-08-13T07:29:48.218060026Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:29:48.218372 dockerd[1789]: time="2025-08-13T07:29:48.218238632Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:29:48.218422 dockerd[1789]: time="2025-08-13T07:29:48.218388801Z" level=info msg="Daemon has completed initialization" Aug 13 07:29:48.272329 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:29:48.273606 dockerd[1789]: time="2025-08-13T07:29:48.272036819Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:29:48.371107 systemd-timesyncd[1403]: Contacted time server [2a01:7e00::f03c:91ff:fe89:410f]:123 (2.flatcar.pool.ntp.org). Aug 13 07:29:48.371210 systemd-timesyncd[1403]: Initial clock synchronization to Wed 2025-08-13 07:29:48.588667 UTC. Aug 13 07:29:49.041968 containerd[1500]: time="2025-08-13T07:29:49.041831466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Aug 13 07:29:49.969504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877286210.mount: Deactivated successfully. Aug 13 07:29:51.473296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 07:29:51.484932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:29:51.910958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
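[Editor's note] The PullImage request logged by containerd arrives over its CRI socket; the client that issued it is not visible in this log, but the same pull can be reproduced with crictl or ctr. A usage sketch:

    crictl pull registry.k8s.io/kube-apiserver:v1.32.4
    # or directly against containerd's Kubernetes namespace:
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.4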
Aug 13 07:29:51.925369 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:29:52.115437 kubelet[1996]: E0813 07:29:52.115265 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:29:52.119155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:29:52.120055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:29:52.336474 containerd[1500]: time="2025-08-13T07:29:52.336347827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:52.338223 containerd[1500]: time="2025-08-13T07:29:52.337614658Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" Aug 13 07:29:52.340661 containerd[1500]: time="2025-08-13T07:29:52.339880799Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:52.345705 containerd[1500]: time="2025-08-13T07:29:52.345664739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:52.347611 containerd[1500]: time="2025-08-13T07:29:52.347561965Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 3.305572264s" Aug 13 07:29:52.347864 containerd[1500]: time="2025-08-13T07:29:52.347823076Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Aug 13 07:29:52.350405 containerd[1500]: time="2025-08-13T07:29:52.350330053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Aug 13 07:29:55.100343 containerd[1500]: time="2025-08-13T07:29:55.098672197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:55.102592 containerd[1500]: time="2025-08-13T07:29:55.102540178Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" Aug 13 07:29:55.104258 containerd[1500]: time="2025-08-13T07:29:55.104223125Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:55.108085 containerd[1500]: time="2025-08-13T07:29:55.108043448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 
07:29:55.109988 containerd[1500]: time="2025-08-13T07:29:55.109950855Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.759165092s" Aug 13 07:29:55.110135 containerd[1500]: time="2025-08-13T07:29:55.110106721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Aug 13 07:29:55.111566 containerd[1500]: time="2025-08-13T07:29:55.111527730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Aug 13 07:29:57.799752 containerd[1500]: time="2025-08-13T07:29:57.798106560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:57.802530 containerd[1500]: time="2025-08-13T07:29:57.802436283Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" Aug 13 07:29:57.803408 containerd[1500]: time="2025-08-13T07:29:57.803338909Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:57.808907 containerd[1500]: time="2025-08-13T07:29:57.808842874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:29:57.811278 containerd[1500]: time="2025-08-13T07:29:57.810597310Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.699005103s" Aug 13 07:29:57.811278 containerd[1500]: time="2025-08-13T07:29:57.810739405Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Aug 13 07:29:57.812909 containerd[1500]: time="2025-08-13T07:29:57.812855865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Aug 13 07:29:59.879178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206116188.mount: Deactivated successfully. 
Aug 13 07:30:00.804504 containerd[1500]: time="2025-08-13T07:30:00.804398856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:00.806160 containerd[1500]: time="2025-08-13T07:30:00.805898175Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" Aug 13 07:30:00.807141 containerd[1500]: time="2025-08-13T07:30:00.807064400Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:00.809956 containerd[1500]: time="2025-08-13T07:30:00.809895903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:00.811279 containerd[1500]: time="2025-08-13T07:30:00.811069547Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.99816293s" Aug 13 07:30:00.811279 containerd[1500]: time="2025-08-13T07:30:00.811121023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Aug 13 07:30:00.813203 containerd[1500]: time="2025-08-13T07:30:00.813112958Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:30:01.878152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712211854.mount: Deactivated successfully. Aug 13 07:30:02.149854 update_engine[1486]: I20250813 07:30:02.146978 1486 update_attempter.cc:509] Updating boot flags... Aug 13 07:30:02.171752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 07:30:02.184028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:30:02.261667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2042) Aug 13 07:30:02.422763 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2043) Aug 13 07:30:02.572699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2043) Aug 13 07:30:02.897918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:30:02.912245 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:30:03.033814 kubelet[2055]: E0813 07:30:03.033719 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:30:03.036605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:30:03.036890 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:30:03.848664 containerd[1500]: time="2025-08-13T07:30:03.847074304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:03.848664 containerd[1500]: time="2025-08-13T07:30:03.848824806Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Aug 13 07:30:03.849940 containerd[1500]: time="2025-08-13T07:30:03.849489588Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:03.855937 containerd[1500]: time="2025-08-13T07:30:03.855858491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:03.858904 containerd[1500]: time="2025-08-13T07:30:03.858504893Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.045340367s" Aug 13 07:30:03.858904 containerd[1500]: time="2025-08-13T07:30:03.858614159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:30:03.862912 containerd[1500]: time="2025-08-13T07:30:03.862817781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:30:04.895480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507891006.mount: Deactivated successfully. 
Aug 13 07:30:04.902719 containerd[1500]: time="2025-08-13T07:30:04.902662478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:04.904212 containerd[1500]: time="2025-08-13T07:30:04.903943778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Aug 13 07:30:04.905094 containerd[1500]: time="2025-08-13T07:30:04.905047673Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:04.908356 containerd[1500]: time="2025-08-13T07:30:04.908316546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:04.909868 containerd[1500]: time="2025-08-13T07:30:04.909833684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.046927586s" Aug 13 07:30:04.910145 containerd[1500]: time="2025-08-13T07:30:04.910015648Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:30:04.911083 containerd[1500]: time="2025-08-13T07:30:04.910820984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 07:30:06.494850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787336971.mount: Deactivated successfully. Aug 13 07:30:10.467192 containerd[1500]: time="2025-08-13T07:30:10.466935641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:10.470814 containerd[1500]: time="2025-08-13T07:30:10.470746042Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Aug 13 07:30:10.470907 containerd[1500]: time="2025-08-13T07:30:10.470856907Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:10.477294 containerd[1500]: time="2025-08-13T07:30:10.477223177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:10.480350 containerd[1500]: time="2025-08-13T07:30:10.479132426Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.567656106s" Aug 13 07:30:10.480350 containerd[1500]: time="2025-08-13T07:30:10.479206829Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 07:30:13.222595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Aug 13 07:30:13.234742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:30:13.573109 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:30:13.573333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:30:13.635666 kubelet[2188]: E0813 07:30:13.635575 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:30:13.638112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:30:13.638408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:30:14.933409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:30:14.944032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:30:14.991470 systemd[1]: Reloading requested from client PID 2202 ('systemctl') (unit session-11.scope)... Aug 13 07:30:14.991518 systemd[1]: Reloading... Aug 13 07:30:15.181683 zram_generator::config[2241]: No configuration found. Aug 13 07:30:15.330577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:30:15.442601 systemd[1]: Reloading finished in 450 ms. Aug 13 07:30:15.519662 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:30:15.519988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:30:15.523914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:30:15.793904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:30:15.808179 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:30:15.868032 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:30:15.868032 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:30:15.868032 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:30:15.868773 kubelet[2309]: I0813 07:30:15.868132 2309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:30:16.241566 kubelet[2309]: I0813 07:30:16.241476 2309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:30:16.241566 kubelet[2309]: I0813 07:30:16.241519 2309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:30:16.241933 kubelet[2309]: I0813 07:30:16.241900 2309 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:30:16.275844 kubelet[2309]: E0813 07:30:16.275780 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.76.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:16.278197 kubelet[2309]: I0813 07:30:16.278127 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:30:16.294055 kubelet[2309]: E0813 07:30:16.293954 2309 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:30:16.294055 kubelet[2309]: I0813 07:30:16.294042 2309 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:30:16.303915 kubelet[2309]: I0813 07:30:16.303840 2309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:30:16.308206 kubelet[2309]: I0813 07:30:16.307780 2309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:30:16.308206 kubelet[2309]: I0813 07:30:16.307839 2309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-qvhwp.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:30:16.310235 kubelet[2309]: I0813 07:30:16.309885 2309 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:30:16.310235 kubelet[2309]: I0813 07:30:16.309909 2309 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:30:16.311194 kubelet[2309]: I0813 07:30:16.311171 2309 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:30:16.318055 kubelet[2309]: I0813 07:30:16.318030 2309 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:30:16.318216 kubelet[2309]: I0813 07:30:16.318190 2309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:30:16.318364 kubelet[2309]: I0813 07:30:16.318343 2309 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:30:16.318490 kubelet[2309]: I0813 07:30:16.318470 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:30:16.324179 kubelet[2309]: W0813 07:30:16.324101 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.76.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-qvhwp.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:16.324272 kubelet[2309]: E0813 07:30:16.324221 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.76.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-qvhwp.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 
07:30:16.324896 kubelet[2309]: W0813 07:30:16.324853 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.76.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:16.324985 kubelet[2309]: E0813 07:30:16.324915 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.76.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:16.328027 kubelet[2309]: I0813 07:30:16.327967 2309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:30:16.332379 kubelet[2309]: I0813 07:30:16.332352 2309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:30:16.333355 kubelet[2309]: W0813 07:30:16.333333 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:30:16.337699 kubelet[2309]: I0813 07:30:16.337672 2309 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:30:16.338080 kubelet[2309]: I0813 07:30:16.337842 2309 server.go:1287] "Started kubelet" Aug 13 07:30:16.341546 kubelet[2309]: I0813 07:30:16.341475 2309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:30:16.346975 kubelet[2309]: I0813 07:30:16.346939 2309 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:30:16.348400 kubelet[2309]: I0813 07:30:16.347728 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:30:16.349659 kubelet[2309]: I0813 07:30:16.348881 2309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:30:16.351141 kubelet[2309]: E0813 07:30:16.348253 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.76.66:6443/api/v1/namespaces/default/events\": dial tcp 10.243.76.66:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-qvhwp.gb1.brightbox.com.185b431020aee1a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-qvhwp.gb1.brightbox.com,UID:srv-qvhwp.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-qvhwp.gb1.brightbox.com,},FirstTimestamp:2025-08-13 07:30:16.337809825 +0000 UTC m=+0.524496605,LastTimestamp:2025-08-13 07:30:16.337809825 +0000 UTC m=+0.524496605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-qvhwp.gb1.brightbox.com,}" Aug 13 07:30:16.357575 kubelet[2309]: I0813 07:30:16.357335 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:30:16.358777 kubelet[2309]: I0813 07:30:16.358568 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:30:16.367044 kubelet[2309]: I0813 07:30:16.366998 2309 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:30:16.367424 
kubelet[2309]: E0813 07:30:16.367398 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" Aug 13 07:30:16.368158 kubelet[2309]: I0813 07:30:16.367939 2309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:30:16.368158 kubelet[2309]: I0813 07:30:16.368028 2309 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:30:16.369058 kubelet[2309]: W0813 07:30:16.369015 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.76.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:16.369783 kubelet[2309]: E0813 07:30:16.369728 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.76.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:16.369890 kubelet[2309]: E0813 07:30:16.369857 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.76.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-qvhwp.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.76.66:6443: connect: connection refused" interval="200ms" Aug 13 07:30:16.370987 kubelet[2309]: E0813 07:30:16.370891 2309 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:30:16.380940 kubelet[2309]: I0813 07:30:16.380488 2309 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:30:16.380940 kubelet[2309]: I0813 07:30:16.380513 2309 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:30:16.380940 kubelet[2309]: I0813 07:30:16.380653 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:30:16.386514 kubelet[2309]: I0813 07:30:16.386456 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:30:16.388716 kubelet[2309]: I0813 07:30:16.388165 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:30:16.388716 kubelet[2309]: I0813 07:30:16.388221 2309 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:30:16.388716 kubelet[2309]: I0813 07:30:16.388260 2309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 07:30:16.388716 kubelet[2309]: I0813 07:30:16.388272 2309 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:30:16.388716 kubelet[2309]: E0813 07:30:16.388356 2309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:30:16.399820 kubelet[2309]: W0813 07:30:16.399770 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.76.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:16.400027 kubelet[2309]: E0813 07:30:16.399993 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.76.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:16.438503 kubelet[2309]: I0813 07:30:16.438468 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:30:16.438503 kubelet[2309]: I0813 07:30:16.438497 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:30:16.438796 kubelet[2309]: I0813 07:30:16.438531 2309 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:30:16.443790 kubelet[2309]: I0813 07:30:16.443747 2309 policy_none.go:49] "None policy: Start" Aug 13 07:30:16.443790 kubelet[2309]: I0813 07:30:16.443791 2309 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:30:16.443980 kubelet[2309]: I0813 07:30:16.443821 2309 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:30:16.452901 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:30:16.467580 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:30:16.468221 kubelet[2309]: E0813 07:30:16.468172 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" Aug 13 07:30:16.473306 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:30:16.482639 kubelet[2309]: I0813 07:30:16.481916 2309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:30:16.482639 kubelet[2309]: I0813 07:30:16.482200 2309 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:30:16.482639 kubelet[2309]: I0813 07:30:16.482228 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:30:16.483223 kubelet[2309]: I0813 07:30:16.483195 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:30:16.485383 kubelet[2309]: E0813 07:30:16.485348 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:30:16.485463 kubelet[2309]: E0813 07:30:16.485428 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-qvhwp.gb1.brightbox.com\" not found" Aug 13 07:30:16.504825 systemd[1]: Created slice kubepods-burstable-podab9f095b963224bb96a5853dfa534140.slice - libcontainer container kubepods-burstable-podab9f095b963224bb96a5853dfa534140.slice. Aug 13 07:30:16.525283 kubelet[2309]: E0813 07:30:16.525223 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.530704 systemd[1]: Created slice kubepods-burstable-poddba51d400647deca0fbe5bca1a9f85da.slice - libcontainer container kubepods-burstable-poddba51d400647deca0fbe5bca1a9f85da.slice. Aug 13 07:30:16.535388 kubelet[2309]: E0813 07:30:16.535022 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.537477 systemd[1]: Created slice kubepods-burstable-pod067414cd9ab50bed4b9b0d40d88b27e5.slice - libcontainer container kubepods-burstable-pod067414cd9ab50bed4b9b0d40d88b27e5.slice. Aug 13 07:30:16.539939 kubelet[2309]: E0813 07:30:16.539902 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.571523 kubelet[2309]: E0813 07:30:16.571361 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.76.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-qvhwp.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.76.66:6443: connect: connection refused" interval="400ms" Aug 13 07:30:16.585741 kubelet[2309]: I0813 07:30:16.585664 2309 kubelet_node_status.go:75] "Attempting to register node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.586128 kubelet[2309]: E0813 07:30:16.586085 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.76.66:6443/api/v1/nodes\": dial tcp 10.243.76.66:6443: connect: connection refused" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670270 kubelet[2309]: I0813 07:30:16.669819 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab9f095b963224bb96a5853dfa534140-ca-certs\") pod \"kube-apiserver-srv-qvhwp.gb1.brightbox.com\" (UID: \"ab9f095b963224bb96a5853dfa534140\") " pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670270 kubelet[2309]: I0813 07:30:16.669915 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab9f095b963224bb96a5853dfa534140-k8s-certs\") pod \"kube-apiserver-srv-qvhwp.gb1.brightbox.com\" (UID: \"ab9f095b963224bb96a5853dfa534140\") " pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670270 kubelet[2309]: I0813 07:30:16.669951 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dba51d400647deca0fbe5bca1a9f85da-kubeconfig\") pod \"kube-scheduler-srv-qvhwp.gb1.brightbox.com\" (UID: 
\"dba51d400647deca0fbe5bca1a9f85da\") " pod="kube-system/kube-scheduler-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670270 kubelet[2309]: I0813 07:30:16.669980 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab9f095b963224bb96a5853dfa534140-usr-share-ca-certificates\") pod \"kube-apiserver-srv-qvhwp.gb1.brightbox.com\" (UID: \"ab9f095b963224bb96a5853dfa534140\") " pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670270 kubelet[2309]: I0813 07:30:16.670007 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-ca-certs\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670754 kubelet[2309]: I0813 07:30:16.670033 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-flexvolume-dir\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670754 kubelet[2309]: I0813 07:30:16.670058 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-k8s-certs\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670754 kubelet[2309]: I0813 07:30:16.670083 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-kubeconfig\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.670754 kubelet[2309]: I0813 07:30:16.670109 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.791874 kubelet[2309]: I0813 07:30:16.790873 2309 kubelet_node_status.go:75] "Attempting to register node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.792262 kubelet[2309]: E0813 07:30:16.792199 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.76.66:6443/api/v1/nodes\": dial tcp 10.243.76.66:6443: connect: connection refused" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:16.830855 containerd[1500]: time="2025-08-13T07:30:16.830747909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-qvhwp.gb1.brightbox.com,Uid:ab9f095b963224bb96a5853dfa534140,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:16.844924 containerd[1500]: time="2025-08-13T07:30:16.844834663Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-qvhwp.gb1.brightbox.com,Uid:067414cd9ab50bed4b9b0d40d88b27e5,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:16.845318 containerd[1500]: time="2025-08-13T07:30:16.845277681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-qvhwp.gb1.brightbox.com,Uid:dba51d400647deca0fbe5bca1a9f85da,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:16.967055 systemd[1]: Started sshd@9-10.243.76.66:22-103.203.48.238:37454.service - OpenSSH per-connection server daemon (103.203.48.238:37454). Aug 13 07:30:16.979862 kubelet[2309]: E0813 07:30:16.979807 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.76.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-qvhwp.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.76.66:6443: connect: connection refused" interval="800ms" Aug 13 07:30:17.195737 kubelet[2309]: I0813 07:30:17.195593 2309 kubelet_node_status.go:75] "Attempting to register node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:17.196562 kubelet[2309]: E0813 07:30:17.196431 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.76.66:6443/api/v1/nodes\": dial tcp 10.243.76.66:6443: connect: connection refused" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:17.249835 kubelet[2309]: W0813 07:30:17.249487 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.76.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-qvhwp.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:17.249835 kubelet[2309]: E0813 07:30:17.249735 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.76.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-qvhwp.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:17.465407 kubelet[2309]: W0813 07:30:17.465103 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.76.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:17.465407 kubelet[2309]: E0813 07:30:17.465245 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.76.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:17.781553 kubelet[2309]: E0813 07:30:17.781466 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.76.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-qvhwp.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.76.66:6443: connect: connection refused" interval="1.6s" Aug 13 07:30:17.877437 kubelet[2309]: W0813 07:30:17.877333 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.76.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:17.877437 kubelet[2309]: E0813 07:30:17.877430 2309 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.76.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:17.966685 kubelet[2309]: W0813 07:30:17.966532 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.76.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:17.966685 kubelet[2309]: E0813 07:30:17.966663 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.76.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:18.000493 kubelet[2309]: I0813 07:30:18.000414 2309 kubelet_node_status.go:75] "Attempting to register node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:18.001275 kubelet[2309]: E0813 07:30:18.001209 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.76.66:6443/api/v1/nodes\": dial tcp 10.243.76.66:6443: connect: connection refused" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:18.023248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463626735.mount: Deactivated successfully. Aug 13 07:30:18.035354 containerd[1500]: time="2025-08-13T07:30:18.035209665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:30:18.038182 containerd[1500]: time="2025-08-13T07:30:18.038104391Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 13 07:30:18.039061 containerd[1500]: time="2025-08-13T07:30:18.039013566Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:30:18.040323 containerd[1500]: time="2025-08-13T07:30:18.040287290Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:30:18.041794 containerd[1500]: time="2025-08-13T07:30:18.041744593Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:30:18.042056 containerd[1500]: time="2025-08-13T07:30:18.042013087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:30:18.042774 containerd[1500]: time="2025-08-13T07:30:18.042424552Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:30:18.048137 containerd[1500]: time="2025-08-13T07:30:18.048052644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:30:18.050389 containerd[1500]: time="2025-08-13T07:30:18.049503480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.204144299s" Aug 13 07:30:18.061114 containerd[1500]: time="2025-08-13T07:30:18.061018504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.216074899s" Aug 13 07:30:18.063223 containerd[1500]: time="2025-08-13T07:30:18.063133451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.231573791s" Aug 13 07:30:18.294946 containerd[1500]: time="2025-08-13T07:30:18.294093876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:18.294946 containerd[1500]: time="2025-08-13T07:30:18.294177637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:18.294946 containerd[1500]: time="2025-08-13T07:30:18.294194873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:18.294946 containerd[1500]: time="2025-08-13T07:30:18.294321339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:18.297307 containerd[1500]: time="2025-08-13T07:30:18.297093050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:18.297307 containerd[1500]: time="2025-08-13T07:30:18.297173239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:18.297307 containerd[1500]: time="2025-08-13T07:30:18.297194076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:18.297706 containerd[1500]: time="2025-08-13T07:30:18.297301385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:18.310100 containerd[1500]: time="2025-08-13T07:30:18.307697559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:18.310100 containerd[1500]: time="2025-08-13T07:30:18.309795708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:18.310100 containerd[1500]: time="2025-08-13T07:30:18.309814304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:18.310100 containerd[1500]: time="2025-08-13T07:30:18.309946954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:18.362042 systemd[1]: Started cri-containerd-5faf630966b6f32148390dfb5a0f79e4490f2138ebf3e979f70c4bc21e068427.scope - libcontainer container 5faf630966b6f32148390dfb5a0f79e4490f2138ebf3e979f70c4bc21e068427. Aug 13 07:30:18.365808 systemd[1]: Started cri-containerd-c6a8a5c2b9aff3ac9d34b9dcc4a478fea59c67672e4540edc2035920cb3e63ef.scope - libcontainer container c6a8a5c2b9aff3ac9d34b9dcc4a478fea59c67672e4540edc2035920cb3e63ef. Aug 13 07:30:18.382919 systemd[1]: Started cri-containerd-632f494548be030695f02d03f1aec991475bd22a5c0e67d9bc6e0cc2e8acbbe0.scope - libcontainer container 632f494548be030695f02d03f1aec991475bd22a5c0e67d9bc6e0cc2e8acbbe0. Aug 13 07:30:18.437939 kubelet[2309]: E0813 07:30:18.437855 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.76.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:18.469831 sshd[2372]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:18.504390 containerd[1500]: time="2025-08-13T07:30:18.504322059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-qvhwp.gb1.brightbox.com,Uid:ab9f095b963224bb96a5853dfa534140,Namespace:kube-system,Attempt:0,} returns sandbox id \"5faf630966b6f32148390dfb5a0f79e4490f2138ebf3e979f70c4bc21e068427\"" Aug 13 07:30:18.512103 containerd[1500]: time="2025-08-13T07:30:18.511798799Z" level=info msg="CreateContainer within sandbox \"5faf630966b6f32148390dfb5a0f79e4490f2138ebf3e979f70c4bc21e068427\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:30:18.513969 containerd[1500]: time="2025-08-13T07:30:18.513688256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-qvhwp.gb1.brightbox.com,Uid:067414cd9ab50bed4b9b0d40d88b27e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6a8a5c2b9aff3ac9d34b9dcc4a478fea59c67672e4540edc2035920cb3e63ef\"" Aug 13 07:30:18.523727 containerd[1500]: time="2025-08-13T07:30:18.523669544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-qvhwp.gb1.brightbox.com,Uid:dba51d400647deca0fbe5bca1a9f85da,Namespace:kube-system,Attempt:0,} returns sandbox id \"632f494548be030695f02d03f1aec991475bd22a5c0e67d9bc6e0cc2e8acbbe0\"" Aug 13 07:30:18.524043 containerd[1500]: time="2025-08-13T07:30:18.523986244Z" level=info msg="CreateContainer within sandbox \"c6a8a5c2b9aff3ac9d34b9dcc4a478fea59c67672e4540edc2035920cb3e63ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:30:18.529545 containerd[1500]: time="2025-08-13T07:30:18.529404612Z" level=info msg="CreateContainer within sandbox \"632f494548be030695f02d03f1aec991475bd22a5c0e67d9bc6e0cc2e8acbbe0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:30:18.546565 containerd[1500]: time="2025-08-13T07:30:18.544031014Z" level=info msg="CreateContainer within sandbox \"5faf630966b6f32148390dfb5a0f79e4490f2138ebf3e979f70c4bc21e068427\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1691b6e5ecbd593c9bd4c9a4fefe9662e2c4b3d62c3657368d623486189fa21e\"" Aug 13 07:30:18.547213 containerd[1500]: time="2025-08-13T07:30:18.547170667Z" level=info msg="StartContainer for \"1691b6e5ecbd593c9bd4c9a4fefe9662e2c4b3d62c3657368d623486189fa21e\"" Aug 13 07:30:18.561679 containerd[1500]: time="2025-08-13T07:30:18.561603821Z" level=info msg="CreateContainer within sandbox \"c6a8a5c2b9aff3ac9d34b9dcc4a478fea59c67672e4540edc2035920cb3e63ef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b1a384ba2fc319a0539d259e706680cb89f11192ea9891f12b6d6219b4bdc58c\"" Aug 13 07:30:18.562255 containerd[1500]: time="2025-08-13T07:30:18.562220429Z" level=info msg="StartContainer for \"b1a384ba2fc319a0539d259e706680cb89f11192ea9891f12b6d6219b4bdc58c\"" Aug 13 07:30:18.572969 containerd[1500]: time="2025-08-13T07:30:18.572673641Z" level=info msg="CreateContainer within sandbox \"632f494548be030695f02d03f1aec991475bd22a5c0e67d9bc6e0cc2e8acbbe0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"93f6397bc8f67439190bfff0181b436b2a33608dfe47eb743cc196e523039867\"" Aug 13 07:30:18.573805 containerd[1500]: time="2025-08-13T07:30:18.573707356Z" level=info msg="StartContainer for \"93f6397bc8f67439190bfff0181b436b2a33608dfe47eb743cc196e523039867\"" Aug 13 07:30:18.597839 systemd[1]: Started cri-containerd-1691b6e5ecbd593c9bd4c9a4fefe9662e2c4b3d62c3657368d623486189fa21e.scope - libcontainer container 1691b6e5ecbd593c9bd4c9a4fefe9662e2c4b3d62c3657368d623486189fa21e. Aug 13 07:30:18.617083 systemd[1]: Started cri-containerd-b1a384ba2fc319a0539d259e706680cb89f11192ea9891f12b6d6219b4bdc58c.scope - libcontainer container b1a384ba2fc319a0539d259e706680cb89f11192ea9891f12b6d6219b4bdc58c. Aug 13 07:30:18.654906 systemd[1]: Started cri-containerd-93f6397bc8f67439190bfff0181b436b2a33608dfe47eb743cc196e523039867.scope - libcontainer container 93f6397bc8f67439190bfff0181b436b2a33608dfe47eb743cc196e523039867. 
Aug 13 07:30:18.735937 containerd[1500]: time="2025-08-13T07:30:18.735781917Z" level=info msg="StartContainer for \"b1a384ba2fc319a0539d259e706680cb89f11192ea9891f12b6d6219b4bdc58c\" returns successfully" Aug 13 07:30:18.736112 containerd[1500]: time="2025-08-13T07:30:18.735785360Z" level=info msg="StartContainer for \"1691b6e5ecbd593c9bd4c9a4fefe9662e2c4b3d62c3657368d623486189fa21e\" returns successfully" Aug 13 07:30:18.777109 containerd[1500]: time="2025-08-13T07:30:18.777049529Z" level=info msg="StartContainer for \"93f6397bc8f67439190bfff0181b436b2a33608dfe47eb743cc196e523039867\" returns successfully" Aug 13 07:30:19.328417 kubelet[2309]: W0813 07:30:19.328311 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.76.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.76.66:6443: connect: connection refused Aug 13 07:30:19.329092 kubelet[2309]: E0813 07:30:19.329024 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.76.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.76.66:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:30:19.382430 kubelet[2309]: E0813 07:30:19.382355 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.76.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-qvhwp.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.76.66:6443: connect: connection refused" interval="3.2s" Aug 13 07:30:19.429142 kubelet[2309]: E0813 07:30:19.428480 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:19.434640 kubelet[2309]: E0813 07:30:19.432969 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:19.438376 kubelet[2309]: E0813 07:30:19.438122 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:19.611478 kubelet[2309]: I0813 07:30:19.607343 2309 kubelet_node_status.go:75] "Attempting to register node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:20.444252 kubelet[2309]: E0813 07:30:20.443314 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:20.444252 kubelet[2309]: E0813 07:30:20.444003 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:20.586667 sshd[2343]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:20.988803 sshd[2587]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:21.446847 kubelet[2309]: E0813 07:30:21.446201 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 
13 07:30:22.305377 kubelet[2309]: E0813 07:30:22.305293 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:22.314239 kubelet[2309]: I0813 07:30:22.313963 2309 kubelet_node_status.go:78] "Successfully registered node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:22.329270 kubelet[2309]: I0813 07:30:22.329220 2309 apiserver.go:52] "Watching apiserver" Aug 13 07:30:22.359988 kubelet[2309]: E0813 07:30:22.359795 2309 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-qvhwp.gb1.brightbox.com.185b431020aee1a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-qvhwp.gb1.brightbox.com,UID:srv-qvhwp.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-qvhwp.gb1.brightbox.com,},FirstTimestamp:2025-08-13 07:30:16.337809825 +0000 UTC m=+0.524496605,LastTimestamp:2025-08-13 07:30:16.337809825 +0000 UTC m=+0.524496605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-qvhwp.gb1.brightbox.com,}" Aug 13 07:30:22.369069 kubelet[2309]: I0813 07:30:22.368544 2309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:30:22.369069 kubelet[2309]: I0813 07:30:22.368598 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:22.388807 kubelet[2309]: E0813 07:30:22.388399 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-qvhwp.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:22.388807 kubelet[2309]: I0813 07:30:22.388463 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:22.395346 kubelet[2309]: E0813 07:30:22.395305 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-qvhwp.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:22.395496 kubelet[2309]: I0813 07:30:22.395367 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:22.398533 kubelet[2309]: E0813 07:30:22.398504 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:23.044587 sshd[2343]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:23.449584 sshd[2588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:24.290151 systemd[1]: Reloading requested from client PID 2590 ('systemctl') (unit session-11.scope)... Aug 13 07:30:24.290201 systemd[1]: Reloading... Aug 13 07:30:24.428663 zram_generator::config[2631]: No configuration found. 
Aug 13 07:30:24.616329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:30:24.671651 kubelet[2309]: I0813 07:30:24.669128 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:24.692096 kubelet[2309]: W0813 07:30:24.691716 2309 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:30:24.748134 systemd[1]: Reloading finished in 457 ms. Aug 13 07:30:24.814757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:30:24.837498 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:30:24.838613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:30:24.838978 systemd[1]: kubelet.service: Consumed 1.121s CPU time, 130.6M memory peak, 0B memory swap peak. Aug 13 07:30:24.853740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:30:25.217948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:30:25.232235 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:30:25.350446 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:30:25.350446 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:30:25.350446 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:30:25.350446 kubelet[2695]: I0813 07:30:25.348809 2695 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:30:25.369304 kubelet[2695]: I0813 07:30:25.366560 2695 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:30:25.369304 kubelet[2695]: I0813 07:30:25.366600 2695 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:30:25.371609 kubelet[2695]: I0813 07:30:25.370305 2695 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:30:25.380655 kubelet[2695]: I0813 07:30:25.380438 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 07:30:25.402638 kubelet[2695]: I0813 07:30:25.401793 2695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:30:25.410725 sudo[2709]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 07:30:25.411340 sudo[2709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 07:30:25.422709 kubelet[2695]: E0813 07:30:25.422092 2695 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:30:25.422709 kubelet[2695]: I0813 07:30:25.422177 2695 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:30:25.431395 kubelet[2695]: I0813 07:30:25.431306 2695 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:30:25.432487 kubelet[2695]: I0813 07:30:25.431833 2695 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:30:25.432487 kubelet[2695]: I0813 07:30:25.431874 2695 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-qvhwp.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:30:25.432487 kubelet[2695]: I0813 07:30:25.432242 2695 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:30:25.432487 kubelet[2695]: I0813 07:30:25.432264 2695 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:30:25.433826 kubelet[2695]: I0813 07:30:25.433440 2695 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:30:25.435142 kubelet[2695]: I0813 07:30:25.433965 2695 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:30:25.436732 kubelet[2695]: I0813 07:30:25.436709 2695 kubelet.go:341] "Adding static pod path" 
path="/etc/kubernetes/manifests" Aug 13 07:30:25.438649 kubelet[2695]: I0813 07:30:25.436891 2695 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:30:25.438779 kubelet[2695]: I0813 07:30:25.438758 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:30:25.462553 kubelet[2695]: I0813 07:30:25.460074 2695 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:30:25.462553 kubelet[2695]: I0813 07:30:25.460898 2695 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:30:25.466051 kubelet[2695]: I0813 07:30:25.464985 2695 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:30:25.466051 kubelet[2695]: I0813 07:30:25.465057 2695 server.go:1287] "Started kubelet" Aug 13 07:30:25.503016 kubelet[2695]: I0813 07:30:25.502981 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:30:25.508040 kubelet[2695]: I0813 07:30:25.507907 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:30:25.509425 kubelet[2695]: I0813 07:30:25.509401 2695 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:30:25.515546 kubelet[2695]: I0813 07:30:25.490525 2695 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:30:25.518352 kubelet[2695]: I0813 07:30:25.517349 2695 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:30:25.523948 kubelet[2695]: I0813 07:30:25.523919 2695 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:30:25.524489 kubelet[2695]: E0813 07:30:25.524461 2695 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-qvhwp.gb1.brightbox.com\" not found" Aug 13 07:30:25.530377 kubelet[2695]: I0813 07:30:25.530070 2695 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:30:25.533602 kubelet[2695]: I0813 07:30:25.533578 2695 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:30:25.534921 kubelet[2695]: I0813 07:30:25.534897 2695 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:30:25.537732 kubelet[2695]: I0813 07:30:25.535354 2695 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:30:25.538006 kubelet[2695]: I0813 07:30:25.537963 2695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:30:25.542031 kubelet[2695]: I0813 07:30:25.542005 2695 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:30:25.558563 kubelet[2695]: E0813 07:30:25.558519 2695 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:30:25.588815 sshd[2343]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:25.595248 kubelet[2695]: I0813 07:30:25.595164 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:30:25.598305 kubelet[2695]: I0813 07:30:25.598279 2695 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:30:25.599592 kubelet[2695]: I0813 07:30:25.599566 2695 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:30:25.606168 kubelet[2695]: I0813 07:30:25.606132 2695 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:30:25.606168 kubelet[2695]: I0813 07:30:25.606156 2695 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:30:25.610815 kubelet[2695]: E0813 07:30:25.609662 2695 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:30:25.707326 kubelet[2695]: I0813 07:30:25.707119 2695 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:30:25.707326 kubelet[2695]: I0813 07:30:25.707148 2695 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:30:25.707326 kubelet[2695]: I0813 07:30:25.707203 2695 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:30:25.707640 kubelet[2695]: I0813 07:30:25.707494 2695 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:30:25.707640 kubelet[2695]: I0813 07:30:25.707514 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:30:25.707640 kubelet[2695]: I0813 07:30:25.707559 2695 policy_none.go:49] "None policy: Start" Aug 13 07:30:25.707833 kubelet[2695]: I0813 07:30:25.707612 2695 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:30:25.707833 kubelet[2695]: I0813 07:30:25.707735 2695 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:30:25.709391 kubelet[2695]: I0813 07:30:25.707991 2695 state_mem.go:75] "Updated machine memory state" Aug 13 07:30:25.710754 kubelet[2695]: E0813 07:30:25.710111 2695 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:30:25.716562 kubelet[2695]: I0813 07:30:25.716528 2695 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:30:25.717004 kubelet[2695]: I0813 07:30:25.716841 2695 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:30:25.717004 kubelet[2695]: I0813 07:30:25.716873 2695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:30:25.725753 kubelet[2695]: I0813 07:30:25.725133 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:30:25.728641 kubelet[2695]: E0813 07:30:25.728175 2695 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:30:25.789579 sshd[2343]: Received disconnect from 103.203.48.238 port 37454:11: [preauth] Aug 13 07:30:25.789579 sshd[2343]: Disconnected from authenticating user root 103.203.48.238 port 37454 [preauth] Aug 13 07:30:25.794444 systemd[1]: sshd@9-10.243.76.66:22-103.203.48.238:37454.service: Deactivated successfully. 
Aug 13 07:30:25.848160 kubelet[2695]: I0813 07:30:25.848090 2695 kubelet_node_status.go:75] "Attempting to register node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.859455 kubelet[2695]: I0813 07:30:25.859172 2695 kubelet_node_status.go:124] "Node was previously registered" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.859455 kubelet[2695]: I0813 07:30:25.859291 2695 kubelet_node_status.go:78] "Successfully registered node" node="srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.913511 kubelet[2695]: I0813 07:30:25.912683 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.913511 kubelet[2695]: I0813 07:30:25.912937 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.914719 kubelet[2695]: I0813 07:30:25.914509 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.936551 kubelet[2695]: W0813 07:30:25.936471 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:30:25.941249 kubelet[2695]: I0813 07:30:25.939828 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab9f095b963224bb96a5853dfa534140-ca-certs\") pod \"kube-apiserver-srv-qvhwp.gb1.brightbox.com\" (UID: \"ab9f095b963224bb96a5853dfa534140\") " pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944030 kubelet[2695]: I0813 07:30:25.941503 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab9f095b963224bb96a5853dfa534140-usr-share-ca-certificates\") pod \"kube-apiserver-srv-qvhwp.gb1.brightbox.com\" (UID: \"ab9f095b963224bb96a5853dfa534140\") " pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944030 kubelet[2695]: I0813 07:30:25.941553 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dba51d400647deca0fbe5bca1a9f85da-kubeconfig\") pod \"kube-scheduler-srv-qvhwp.gb1.brightbox.com\" (UID: \"dba51d400647deca0fbe5bca1a9f85da\") " pod="kube-system/kube-scheduler-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944030 kubelet[2695]: I0813 07:30:25.941583 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-k8s-certs\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944030 kubelet[2695]: I0813 07:30:25.941667 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-kubeconfig\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944030 kubelet[2695]: I0813 07:30:25.941703 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944394 kubelet[2695]: I0813 07:30:25.941741 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab9f095b963224bb96a5853dfa534140-k8s-certs\") pod \"kube-apiserver-srv-qvhwp.gb1.brightbox.com\" (UID: \"ab9f095b963224bb96a5853dfa534140\") " pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944394 kubelet[2695]: W0813 07:30:25.941755 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:30:25.944394 kubelet[2695]: I0813 07:30:25.941767 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-ca-certs\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944394 kubelet[2695]: I0813 07:30:25.941806 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/067414cd9ab50bed4b9b0d40d88b27e5-flexvolume-dir\") pod \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" (UID: \"067414cd9ab50bed4b9b0d40d88b27e5\") " pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944394 kubelet[2695]: E0813 07:30:25.941830 2695 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-qvhwp.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" Aug 13 07:30:25.944394 kubelet[2695]: W0813 07:30:25.943932 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:30:26.016784 systemd[1]: Started sshd@10-10.243.76.66:22-103.203.48.238:62496.service - OpenSSH per-connection server daemon (103.203.48.238:62496). 
Aug 13 07:30:26.344185 sudo[2709]: pam_unix(sudo:session): session closed for user root Aug 13 07:30:26.440760 kubelet[2695]: I0813 07:30:26.439964 2695 apiserver.go:52] "Watching apiserver" Aug 13 07:30:26.533572 kubelet[2695]: I0813 07:30:26.533507 2695 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:30:26.602010 kubelet[2695]: I0813 07:30:26.601714 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-qvhwp.gb1.brightbox.com" podStartSLOduration=1.6016767060000001 podStartE2EDuration="1.601676706s" podCreationTimestamp="2025-08-13 07:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:30:26.587865371 +0000 UTC m=+1.341869231" watchObservedRunningTime="2025-08-13 07:30:26.601676706 +0000 UTC m=+1.355680553" Aug 13 07:30:26.629728 kubelet[2695]: I0813 07:30:26.627528 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-qvhwp.gb1.brightbox.com" podStartSLOduration=2.627506367 podStartE2EDuration="2.627506367s" podCreationTimestamp="2025-08-13 07:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:30:26.603043081 +0000 UTC m=+1.357046941" watchObservedRunningTime="2025-08-13 07:30:26.627506367 +0000 UTC m=+1.381510215" Aug 13 07:30:26.659890 kubelet[2695]: I0813 07:30:26.659809 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-qvhwp.gb1.brightbox.com" podStartSLOduration=1.659781986 podStartE2EDuration="1.659781986s" podCreationTimestamp="2025-08-13 07:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:30:26.628956942 +0000 UTC m=+1.382960790" watchObservedRunningTime="2025-08-13 07:30:26.659781986 +0000 UTC m=+1.413785832" Aug 13 07:30:27.497318 sshd[2748]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:28.217817 sudo[1773]: pam_unix(sudo:session): session closed for user root Aug 13 07:30:28.365445 sshd[1770]: pam_unix(sshd:session): session closed for user core Aug 13 07:30:28.372715 systemd[1]: sshd@8-10.243.76.66:22-139.178.68.195:58734.service: Deactivated successfully. Aug 13 07:30:28.376376 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:30:28.377324 systemd[1]: session-11.scope: Consumed 7.331s CPU time, 143.9M memory peak, 0B memory swap peak. Aug 13 07:30:28.378560 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:30:28.382347 systemd-logind[1485]: Removed session 11. 
Aug 13 07:30:29.513936 sshd[2739]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:29.916021 sshd[2775]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:29.922055 kubelet[2695]: I0813 07:30:29.921959 2695 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:30:29.923141 kubelet[2695]: I0813 07:30:29.922815 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:30:29.923319 containerd[1500]: time="2025-08-13T07:30:29.922480002Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:30:30.941061 systemd[1]: Created slice kubepods-besteffort-pod0888cb1a_f07b_4952_a67f_e57ddadf4a03.slice - libcontainer container kubepods-besteffort-pod0888cb1a_f07b_4952_a67f_e57ddadf4a03.slice. Aug 13 07:30:30.964775 systemd[1]: Created slice kubepods-burstable-pod3bcb37ff_a9d3_4466_b9c6_b6edd611b777.slice - libcontainer container kubepods-burstable-pod3bcb37ff_a9d3_4466_b9c6_b6edd611b777.slice. Aug 13 07:30:30.972063 kubelet[2695]: I0813 07:30:30.972010 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0888cb1a-f07b-4952-a67f-e57ddadf4a03-kube-proxy\") pod \"kube-proxy-n484n\" (UID: \"0888cb1a-f07b-4952-a67f-e57ddadf4a03\") " pod="kube-system/kube-proxy-n484n" Aug 13 07:30:30.972063 kubelet[2695]: I0813 07:30:30.972063 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-xtables-lock\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.972723 kubelet[2695]: I0813 07:30:30.972117 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-clustermesh-secrets\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.972723 kubelet[2695]: I0813 07:30:30.972149 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-run\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.972723 kubelet[2695]: I0813 07:30:30.972186 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-lib-modules\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.972723 kubelet[2695]: I0813 07:30:30.972210 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-net\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.972723 kubelet[2695]: I0813 07:30:30.972234 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-etc-cni-netd\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.972723 kubelet[2695]: I0813 07:30:30.972278 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0888cb1a-f07b-4952-a67f-e57ddadf4a03-lib-modules\") pod \"kube-proxy-n484n\" (UID: \"0888cb1a-f07b-4952-a67f-e57ddadf4a03\") " pod="kube-system/kube-proxy-n484n" Aug 13 07:30:30.973088 kubelet[2695]: I0813 07:30:30.972322 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkz7g\" (UniqueName: \"kubernetes.io/projected/0888cb1a-f07b-4952-a67f-e57ddadf4a03-kube-api-access-vkz7g\") pod \"kube-proxy-n484n\" (UID: \"0888cb1a-f07b-4952-a67f-e57ddadf4a03\") " pod="kube-system/kube-proxy-n484n" Aug 13 07:30:30.973088 kubelet[2695]: I0813 07:30:30.972359 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hubble-tls\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.973088 kubelet[2695]: I0813 07:30:30.972392 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-bpf-maps\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.973088 kubelet[2695]: I0813 07:30:30.972418 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cni-path\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.973088 kubelet[2695]: I0813 07:30:30.972447 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-cgroup\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.973088 kubelet[2695]: I0813 07:30:30.972472 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqj2\" (UniqueName: \"kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-kube-api-access-gmqj2\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.973440 kubelet[2695]: I0813 07:30:30.972499 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hostproc\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.973440 kubelet[2695]: I0813 07:30:30.972525 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0888cb1a-f07b-4952-a67f-e57ddadf4a03-xtables-lock\") pod \"kube-proxy-n484n\" (UID: \"0888cb1a-f07b-4952-a67f-e57ddadf4a03\") " 
pod="kube-system/kube-proxy-n484n" Aug 13 07:30:30.973440 kubelet[2695]: I0813 07:30:30.972551 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-config-path\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:30.973440 kubelet[2695]: I0813 07:30:30.972579 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-kernel\") pod \"cilium-d7nfx\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " pod="kube-system/cilium-d7nfx" Aug 13 07:30:31.024878 systemd[1]: Created slice kubepods-besteffort-pod71dc097a_e994_4252_bb5d_63cd78f0f615.slice - libcontainer container kubepods-besteffort-pod71dc097a_e994_4252_bb5d_63cd78f0f615.slice. Aug 13 07:30:31.073295 kubelet[2695]: I0813 07:30:31.073221 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlblh\" (UniqueName: \"kubernetes.io/projected/71dc097a-e994-4252-bb5d-63cd78f0f615-kube-api-access-nlblh\") pod \"cilium-operator-6c4d7847fc-8n5zc\" (UID: \"71dc097a-e994-4252-bb5d-63cd78f0f615\") " pod="kube-system/cilium-operator-6c4d7847fc-8n5zc" Aug 13 07:30:31.073491 kubelet[2695]: I0813 07:30:31.073401 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71dc097a-e994-4252-bb5d-63cd78f0f615-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8n5zc\" (UID: \"71dc097a-e994-4252-bb5d-63cd78f0f615\") " pod="kube-system/cilium-operator-6c4d7847fc-8n5zc" Aug 13 07:30:31.262252 containerd[1500]: time="2025-08-13T07:30:31.262092993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n484n,Uid:0888cb1a-f07b-4952-a67f-e57ddadf4a03,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:31.271126 containerd[1500]: time="2025-08-13T07:30:31.270674668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7nfx,Uid:3bcb37ff-a9d3-4466-b9c6-b6edd611b777,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:31.318412 containerd[1500]: time="2025-08-13T07:30:31.318227804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:31.318412 containerd[1500]: time="2025-08-13T07:30:31.318363952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:31.319241 containerd[1500]: time="2025-08-13T07:30:31.318390627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:31.319241 containerd[1500]: time="2025-08-13T07:30:31.318888591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:31.332781 containerd[1500]: time="2025-08-13T07:30:31.332729375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8n5zc,Uid:71dc097a-e994-4252-bb5d-63cd78f0f615,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:31.361938 systemd[1]: Started cri-containerd-af46c67213133ae3a4e09d1bc00a61b4b01d0b896b1f2015b34be5aa4294e073.scope - libcontainer container af46c67213133ae3a4e09d1bc00a61b4b01d0b896b1f2015b34be5aa4294e073. Aug 13 07:30:31.363303 containerd[1500]: time="2025-08-13T07:30:31.362585492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:31.363303 containerd[1500]: time="2025-08-13T07:30:31.362728303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:31.363303 containerd[1500]: time="2025-08-13T07:30:31.362753418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:31.363989 containerd[1500]: time="2025-08-13T07:30:31.363644654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:31.417978 systemd[1]: Started cri-containerd-3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8.scope - libcontainer container 3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8. Aug 13 07:30:31.422377 containerd[1500]: time="2025-08-13T07:30:31.421245034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:31.422377 containerd[1500]: time="2025-08-13T07:30:31.421336797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:31.422377 containerd[1500]: time="2025-08-13T07:30:31.421359886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:31.422377 containerd[1500]: time="2025-08-13T07:30:31.421510743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:31.456727 containerd[1500]: time="2025-08-13T07:30:31.455368636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n484n,Uid:0888cb1a-f07b-4952-a67f-e57ddadf4a03,Namespace:kube-system,Attempt:0,} returns sandbox id \"af46c67213133ae3a4e09d1bc00a61b4b01d0b896b1f2015b34be5aa4294e073\"" Aug 13 07:30:31.460722 systemd[1]: Started cri-containerd-2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0.scope - libcontainer container 2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0. 
Aug 13 07:30:31.474813 containerd[1500]: time="2025-08-13T07:30:31.474744902Z" level=info msg="CreateContainer within sandbox \"af46c67213133ae3a4e09d1bc00a61b4b01d0b896b1f2015b34be5aa4294e073\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:30:31.511306 containerd[1500]: time="2025-08-13T07:30:31.511232891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7nfx,Uid:3bcb37ff-a9d3-4466-b9c6-b6edd611b777,Namespace:kube-system,Attempt:0,} returns sandbox id \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\"" Aug 13 07:30:31.518445 containerd[1500]: time="2025-08-13T07:30:31.518044120Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:30:31.538960 containerd[1500]: time="2025-08-13T07:30:31.537505314Z" level=info msg="CreateContainer within sandbox \"af46c67213133ae3a4e09d1bc00a61b4b01d0b896b1f2015b34be5aa4294e073\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80ebbb5d0777880b907d0264e1e3a4aa380861caf70321a853b61eaa031202c4\"" Aug 13 07:30:31.539428 containerd[1500]: time="2025-08-13T07:30:31.539390000Z" level=info msg="StartContainer for \"80ebbb5d0777880b907d0264e1e3a4aa380861caf70321a853b61eaa031202c4\"" Aug 13 07:30:31.580443 containerd[1500]: time="2025-08-13T07:30:31.580389776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8n5zc,Uid:71dc097a-e994-4252-bb5d-63cd78f0f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0\"" Aug 13 07:30:31.597870 systemd[1]: Started cri-containerd-80ebbb5d0777880b907d0264e1e3a4aa380861caf70321a853b61eaa031202c4.scope - libcontainer container 80ebbb5d0777880b907d0264e1e3a4aa380861caf70321a853b61eaa031202c4. Aug 13 07:30:31.656372 containerd[1500]: time="2025-08-13T07:30:31.656308242Z" level=info msg="StartContainer for \"80ebbb5d0777880b907d0264e1e3a4aa380861caf70321a853b61eaa031202c4\" returns successfully" Aug 13 07:30:31.685963 kubelet[2695]: I0813 07:30:31.685863 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n484n" podStartSLOduration=1.685793854 podStartE2EDuration="1.685793854s" podCreationTimestamp="2025-08-13 07:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:30:31.685647734 +0000 UTC m=+6.439651593" watchObservedRunningTime="2025-08-13 07:30:31.685793854 +0000 UTC m=+6.439797700" Aug 13 07:30:31.877501 sshd[2739]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:32.284914 sshd[2947]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:33.991695 sshd[2739]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:34.194549 sshd[2739]: Received disconnect from 103.203.48.238 port 62496:11: [preauth] Aug 13 07:30:34.194549 sshd[2739]: Disconnected from authenticating user root 103.203.48.238 port 62496 [preauth] Aug 13 07:30:34.202798 systemd[1]: sshd@10-10.243.76.66:22-103.203.48.238:62496.service: Deactivated successfully. Aug 13 07:30:34.413232 systemd[1]: Started sshd@11-10.243.76.66:22-103.203.48.238:25446.service - OpenSSH per-connection server daemon (103.203.48.238:25446). 
Aug 13 07:30:35.969167 sshd[3079]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:38.087965 sshd[3073]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:38.452161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876034016.mount: Deactivated successfully. Aug 13 07:30:38.492123 sshd[3082]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:40.354001 sshd[3073]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:40.758959 sshd[3103]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.203.48.238 user=root Aug 13 07:30:41.759819 containerd[1500]: time="2025-08-13T07:30:41.759402539Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:41.763218 containerd[1500]: time="2025-08-13T07:30:41.763070924Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:41.763218 containerd[1500]: time="2025-08-13T07:30:41.763146764Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 07:30:41.765995 containerd[1500]: time="2025-08-13T07:30:41.765955293Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.247839278s" Aug 13 07:30:41.766332 containerd[1500]: time="2025-08-13T07:30:41.766155209Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 07:30:41.770492 containerd[1500]: time="2025-08-13T07:30:41.769316404Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 07:30:41.770492 containerd[1500]: time="2025-08-13T07:30:41.770282677Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:30:41.823359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870869867.mount: Deactivated successfully. 
Aug 13 07:30:41.826692 containerd[1500]: time="2025-08-13T07:30:41.826640378Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\"" Aug 13 07:30:41.828263 containerd[1500]: time="2025-08-13T07:30:41.828067749Z" level=info msg="StartContainer for \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\"" Aug 13 07:30:41.943343 systemd[1]: run-containerd-runc-k8s.io-3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2-runc.EspASv.mount: Deactivated successfully. Aug 13 07:30:41.958974 systemd[1]: Started cri-containerd-3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2.scope - libcontainer container 3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2. Aug 13 07:30:42.005090 containerd[1500]: time="2025-08-13T07:30:42.005030347Z" level=info msg="StartContainer for \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\" returns successfully" Aug 13 07:30:42.022460 systemd[1]: cri-containerd-3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2.scope: Deactivated successfully. Aug 13 07:30:42.228087 sshd[3073]: PAM: Permission denied for root from 103.203.48.238 Aug 13 07:30:42.302246 containerd[1500]: time="2025-08-13T07:30:42.285661179Z" level=info msg="shim disconnected" id=3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2 namespace=k8s.io Aug 13 07:30:42.302246 containerd[1500]: time="2025-08-13T07:30:42.301961713Z" level=warning msg="cleaning up after shim disconnected" id=3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2 namespace=k8s.io Aug 13 07:30:42.302246 containerd[1500]: time="2025-08-13T07:30:42.301994352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:30:42.440642 sshd[3073]: Received disconnect from 103.203.48.238 port 25446:11: [preauth] Aug 13 07:30:42.440642 sshd[3073]: Disconnected from authenticating user root 103.203.48.238 port 25446 [preauth] Aug 13 07:30:42.445305 systemd[1]: sshd@11-10.243.76.66:22-103.203.48.238:25446.service: Deactivated successfully. Aug 13 07:30:42.722481 containerd[1500]: time="2025-08-13T07:30:42.722287966Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:30:42.747864 containerd[1500]: time="2025-08-13T07:30:42.747457652Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\"" Aug 13 07:30:42.752143 containerd[1500]: time="2025-08-13T07:30:42.751815363Z" level=info msg="StartContainer for \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\"" Aug 13 07:30:42.801905 systemd[1]: Started cri-containerd-425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a.scope - libcontainer container 425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a. Aug 13 07:30:42.823248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2-rootfs.mount: Deactivated successfully. 
Aug 13 07:30:42.850938 containerd[1500]: time="2025-08-13T07:30:42.850844160Z" level=info msg="StartContainer for \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\" returns successfully" Aug 13 07:30:42.870834 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:30:42.871437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:30:42.871803 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:30:42.882733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:30:42.888775 systemd[1]: cri-containerd-425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a.scope: Deactivated successfully. Aug 13 07:30:42.924749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a-rootfs.mount: Deactivated successfully. Aug 13 07:30:42.931158 containerd[1500]: time="2025-08-13T07:30:42.930971429Z" level=info msg="shim disconnected" id=425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a namespace=k8s.io Aug 13 07:30:42.931158 containerd[1500]: time="2025-08-13T07:30:42.931060515Z" level=warning msg="cleaning up after shim disconnected" id=425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a namespace=k8s.io Aug 13 07:30:42.931158 containerd[1500]: time="2025-08-13T07:30:42.931080319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:30:42.950231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:30:43.513792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772959098.mount: Deactivated successfully. Aug 13 07:30:43.739404 containerd[1500]: time="2025-08-13T07:30:43.739184958Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:30:43.782355 containerd[1500]: time="2025-08-13T07:30:43.782013030Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\"" Aug 13 07:30:43.787409 containerd[1500]: time="2025-08-13T07:30:43.787372617Z" level=info msg="StartContainer for \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\"" Aug 13 07:30:43.819451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149091783.mount: Deactivated successfully. Aug 13 07:30:43.868968 systemd[1]: Started cri-containerd-13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014.scope - libcontainer container 13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014. Aug 13 07:30:43.934674 containerd[1500]: time="2025-08-13T07:30:43.934607269Z" level=info msg="StartContainer for \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\" returns successfully" Aug 13 07:30:43.944314 systemd[1]: cri-containerd-13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014.scope: Deactivated successfully. Aug 13 07:30:43.986340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014-rootfs.mount: Deactivated successfully. 
Aug 13 07:30:44.040074 containerd[1500]: time="2025-08-13T07:30:44.039604087Z" level=info msg="shim disconnected" id=13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014 namespace=k8s.io Aug 13 07:30:44.040074 containerd[1500]: time="2025-08-13T07:30:44.039743302Z" level=warning msg="cleaning up after shim disconnected" id=13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014 namespace=k8s.io Aug 13 07:30:44.040074 containerd[1500]: time="2025-08-13T07:30:44.039768470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:30:44.532968 containerd[1500]: time="2025-08-13T07:30:44.532913507Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:44.535506 containerd[1500]: time="2025-08-13T07:30:44.535147754Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 07:30:44.540568 containerd[1500]: time="2025-08-13T07:30:44.540340576Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:30:44.545639 containerd[1500]: time="2025-08-13T07:30:44.545462153Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.775231479s" Aug 13 07:30:44.545639 containerd[1500]: time="2025-08-13T07:30:44.545527788Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 07:30:44.551923 containerd[1500]: time="2025-08-13T07:30:44.551869288Z" level=info msg="CreateContainer within sandbox \"2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 07:30:44.573969 containerd[1500]: time="2025-08-13T07:30:44.573861077Z" level=info msg="CreateContainer within sandbox \"2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\"" Aug 13 07:30:44.577931 containerd[1500]: time="2025-08-13T07:30:44.577841514Z" level=info msg="StartContainer for \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\"" Aug 13 07:30:44.627948 systemd[1]: Started cri-containerd-72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8.scope - libcontainer container 72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8. 
Aug 13 07:30:44.673560 containerd[1500]: time="2025-08-13T07:30:44.673496301Z" level=info msg="StartContainer for \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\" returns successfully" Aug 13 07:30:44.751725 containerd[1500]: time="2025-08-13T07:30:44.751579602Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:30:44.778395 kubelet[2695]: I0813 07:30:44.777425 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8n5zc" podStartSLOduration=1.815350761 podStartE2EDuration="14.777329691s" podCreationTimestamp="2025-08-13 07:30:30 +0000 UTC" firstStartedPulling="2025-08-13 07:30:31.584945917 +0000 UTC m=+6.338949755" lastFinishedPulling="2025-08-13 07:30:44.546924852 +0000 UTC m=+19.300928685" observedRunningTime="2025-08-13 07:30:44.776019179 +0000 UTC m=+19.530023032" watchObservedRunningTime="2025-08-13 07:30:44.777329691 +0000 UTC m=+19.531333537" Aug 13 07:30:44.791034 containerd[1500]: time="2025-08-13T07:30:44.790740154Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\"" Aug 13 07:30:44.794700 containerd[1500]: time="2025-08-13T07:30:44.794658867Z" level=info msg="StartContainer for \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\"" Aug 13 07:30:44.882234 systemd[1]: run-containerd-runc-k8s.io-2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad-runc.qhOwG3.mount: Deactivated successfully. Aug 13 07:30:44.891832 systemd[1]: Started cri-containerd-2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad.scope - libcontainer container 2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad. Aug 13 07:30:44.951058 containerd[1500]: time="2025-08-13T07:30:44.950999360Z" level=info msg="StartContainer for \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\" returns successfully" Aug 13 07:30:44.954487 systemd[1]: cri-containerd-2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad.scope: Deactivated successfully. Aug 13 07:30:45.009984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad-rootfs.mount: Deactivated successfully. 
Aug 13 07:30:45.103365 containerd[1500]: time="2025-08-13T07:30:45.102903611Z" level=info msg="shim disconnected" id=2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad namespace=k8s.io Aug 13 07:30:45.103365 containerd[1500]: time="2025-08-13T07:30:45.103010548Z" level=warning msg="cleaning up after shim disconnected" id=2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad namespace=k8s.io Aug 13 07:30:45.103365 containerd[1500]: time="2025-08-13T07:30:45.103027763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:30:45.762827 containerd[1500]: time="2025-08-13T07:30:45.762771752Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:30:45.790730 containerd[1500]: time="2025-08-13T07:30:45.789947229Z" level=info msg="CreateContainer within sandbox \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\"" Aug 13 07:30:45.791652 containerd[1500]: time="2025-08-13T07:30:45.791614933Z" level=info msg="StartContainer for \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\"" Aug 13 07:30:45.894851 systemd[1]: Started cri-containerd-82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc.scope - libcontainer container 82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc. Aug 13 07:30:46.074564 containerd[1500]: time="2025-08-13T07:30:46.072068578Z" level=info msg="StartContainer for \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\" returns successfully" Aug 13 07:30:46.207911 systemd[1]: run-containerd-runc-k8s.io-82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc-runc.ZfurX1.mount: Deactivated successfully. Aug 13 07:30:46.589262 kubelet[2695]: I0813 07:30:46.589214 2695 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:30:46.650693 systemd[1]: Created slice kubepods-burstable-pod4203209e_a41f_4ca5_97dd_9b5ed5baa2ae.slice - libcontainer container kubepods-burstable-pod4203209e_a41f_4ca5_97dd_9b5ed5baa2ae.slice. Aug 13 07:30:46.658553 systemd[1]: Created slice kubepods-burstable-pode7c66e4e_8c5c_417e_97e8_b54e154daa7e.slice - libcontainer container kubepods-burstable-pode7c66e4e_8c5c_417e_97e8_b54e154daa7e.slice. 
Aug 13 07:30:46.709281 kubelet[2695]: I0813 07:30:46.709221 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4203209e-a41f-4ca5-97dd-9b5ed5baa2ae-config-volume\") pod \"coredns-668d6bf9bc-bdq5b\" (UID: \"4203209e-a41f-4ca5-97dd-9b5ed5baa2ae\") " pod="kube-system/coredns-668d6bf9bc-bdq5b" Aug 13 07:30:46.709462 kubelet[2695]: I0813 07:30:46.709296 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwrxs\" (UniqueName: \"kubernetes.io/projected/4203209e-a41f-4ca5-97dd-9b5ed5baa2ae-kube-api-access-bwrxs\") pod \"coredns-668d6bf9bc-bdq5b\" (UID: \"4203209e-a41f-4ca5-97dd-9b5ed5baa2ae\") " pod="kube-system/coredns-668d6bf9bc-bdq5b" Aug 13 07:30:46.709462 kubelet[2695]: I0813 07:30:46.709336 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7c66e4e-8c5c-417e-97e8-b54e154daa7e-config-volume\") pod \"coredns-668d6bf9bc-mrg4z\" (UID: \"e7c66e4e-8c5c-417e-97e8-b54e154daa7e\") " pod="kube-system/coredns-668d6bf9bc-mrg4z" Aug 13 07:30:46.709462 kubelet[2695]: I0813 07:30:46.709374 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4d5\" (UniqueName: \"kubernetes.io/projected/e7c66e4e-8c5c-417e-97e8-b54e154daa7e-kube-api-access-9f4d5\") pod \"coredns-668d6bf9bc-mrg4z\" (UID: \"e7c66e4e-8c5c-417e-97e8-b54e154daa7e\") " pod="kube-system/coredns-668d6bf9bc-mrg4z" Aug 13 07:30:46.959532 containerd[1500]: time="2025-08-13T07:30:46.959377752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bdq5b,Uid:4203209e-a41f-4ca5-97dd-9b5ed5baa2ae,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:46.968736 containerd[1500]: time="2025-08-13T07:30:46.968690032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mrg4z,Uid:e7c66e4e-8c5c-417e-97e8-b54e154daa7e,Namespace:kube-system,Attempt:0,}" Aug 13 07:30:49.176646 systemd-networkd[1434]: cilium_host: Link UP Aug 13 07:30:49.177861 systemd-networkd[1434]: cilium_net: Link UP Aug 13 07:30:49.179177 systemd-networkd[1434]: cilium_net: Gained carrier Aug 13 07:30:49.180877 systemd-networkd[1434]: cilium_host: Gained carrier Aug 13 07:30:49.239909 systemd-networkd[1434]: cilium_net: Gained IPv6LL Aug 13 07:30:49.367975 systemd-networkd[1434]: cilium_vxlan: Link UP Aug 13 07:30:49.368207 systemd-networkd[1434]: cilium_vxlan: Gained carrier Aug 13 07:30:49.663806 systemd-networkd[1434]: cilium_host: Gained IPv6LL Aug 13 07:30:49.917039 kernel: NET: Registered PF_ALG protocol family Aug 13 07:30:50.977704 systemd-networkd[1434]: lxc_health: Link UP Aug 13 07:30:50.984120 systemd-networkd[1434]: lxc_health: Gained carrier Aug 13 07:30:51.305142 kubelet[2695]: I0813 07:30:51.304826 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d7nfx" podStartSLOduration=11.053659647 podStartE2EDuration="21.304775361s" podCreationTimestamp="2025-08-13 07:30:30 +0000 UTC" firstStartedPulling="2025-08-13 07:30:31.516790407 +0000 UTC m=+6.270794247" lastFinishedPulling="2025-08-13 07:30:41.767906113 +0000 UTC m=+16.521909961" observedRunningTime="2025-08-13 07:30:46.788120242 +0000 UTC m=+21.542124103" watchObservedRunningTime="2025-08-13 07:30:51.304775361 +0000 UTC m=+26.058779210" Aug 13 07:30:51.310847 systemd-networkd[1434]: cilium_vxlan: Gained 
IPv6LL Aug 13 07:30:51.686577 systemd-networkd[1434]: lxc312b1f0e41ce: Link UP Aug 13 07:30:51.688893 systemd-networkd[1434]: lxcb23af9befb79: Link UP Aug 13 07:30:51.697653 kernel: eth0: renamed from tmpc3c62 Aug 13 07:30:51.708697 kernel: eth0: renamed from tmp34beb Aug 13 07:30:51.719526 systemd-networkd[1434]: lxcb23af9befb79: Gained carrier Aug 13 07:30:51.721458 systemd-networkd[1434]: lxc312b1f0e41ce: Gained carrier Aug 13 07:30:52.910962 systemd-networkd[1434]: lxc_health: Gained IPv6LL Aug 13 07:30:53.167957 systemd-networkd[1434]: lxc312b1f0e41ce: Gained IPv6LL Aug 13 07:30:53.427159 systemd-networkd[1434]: lxcb23af9befb79: Gained IPv6LL Aug 13 07:30:56.556599 kubelet[2695]: I0813 07:30:56.556525 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:30:57.373070 containerd[1500]: time="2025-08-13T07:30:57.372602277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:57.373070 containerd[1500]: time="2025-08-13T07:30:57.372754538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:57.373070 containerd[1500]: time="2025-08-13T07:30:57.372782826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:57.373070 containerd[1500]: time="2025-08-13T07:30:57.372989169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:57.425886 systemd[1]: run-containerd-runc-k8s.io-34beb3e037b58b0ceaada0a8041b9662d8d71e93737b9ae663a3d32f54960087-runc.beGQx8.mount: Deactivated successfully. Aug 13 07:30:57.455529 systemd[1]: Started cri-containerd-34beb3e037b58b0ceaada0a8041b9662d8d71e93737b9ae663a3d32f54960087.scope - libcontainer container 34beb3e037b58b0ceaada0a8041b9662d8d71e93737b9ae663a3d32f54960087. Aug 13 07:30:57.490770 containerd[1500]: time="2025-08-13T07:30:57.489387690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:30:57.490770 containerd[1500]: time="2025-08-13T07:30:57.489485152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:30:57.490770 containerd[1500]: time="2025-08-13T07:30:57.489507895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:57.490770 containerd[1500]: time="2025-08-13T07:30:57.489705274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:30:57.534835 systemd[1]: Started cri-containerd-c3c62480be3cb9fa3d3177683dd22dbac6fd354c2e491ae909a6633d4b2b9df4.scope - libcontainer container c3c62480be3cb9fa3d3177683dd22dbac6fd354c2e491ae909a6633d4b2b9df4. 
Aug 13 07:30:57.632756 containerd[1500]: time="2025-08-13T07:30:57.632554363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mrg4z,Uid:e7c66e4e-8c5c-417e-97e8-b54e154daa7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"34beb3e037b58b0ceaada0a8041b9662d8d71e93737b9ae663a3d32f54960087\"" Aug 13 07:30:57.661535 containerd[1500]: time="2025-08-13T07:30:57.661294676Z" level=info msg="CreateContainer within sandbox \"34beb3e037b58b0ceaada0a8041b9662d8d71e93737b9ae663a3d32f54960087\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:30:57.691667 containerd[1500]: time="2025-08-13T07:30:57.691564537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bdq5b,Uid:4203209e-a41f-4ca5-97dd-9b5ed5baa2ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3c62480be3cb9fa3d3177683dd22dbac6fd354c2e491ae909a6633d4b2b9df4\"" Aug 13 07:30:57.701866 containerd[1500]: time="2025-08-13T07:30:57.701422598Z" level=info msg="CreateContainer within sandbox \"c3c62480be3cb9fa3d3177683dd22dbac6fd354c2e491ae909a6633d4b2b9df4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:30:57.717091 containerd[1500]: time="2025-08-13T07:30:57.716904832Z" level=info msg="CreateContainer within sandbox \"34beb3e037b58b0ceaada0a8041b9662d8d71e93737b9ae663a3d32f54960087\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd459af955e4391d8abbb5dbf4cedc0b2cf8d093c59c528fe068fffc387e1a7e\"" Aug 13 07:30:57.717912 containerd[1500]: time="2025-08-13T07:30:57.717882792Z" level=info msg="StartContainer for \"cd459af955e4391d8abbb5dbf4cedc0b2cf8d093c59c528fe068fffc387e1a7e\"" Aug 13 07:30:57.729208 containerd[1500]: time="2025-08-13T07:30:57.729062893Z" level=info msg="CreateContainer within sandbox \"c3c62480be3cb9fa3d3177683dd22dbac6fd354c2e491ae909a6633d4b2b9df4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c040fd56b6c6760e99971bba397c523e32b1ebb9098aaf2f6555030ea2018da\"" Aug 13 07:30:57.730469 containerd[1500]: time="2025-08-13T07:30:57.730102026Z" level=info msg="StartContainer for \"5c040fd56b6c6760e99971bba397c523e32b1ebb9098aaf2f6555030ea2018da\"" Aug 13 07:30:57.767870 systemd[1]: Started cri-containerd-cd459af955e4391d8abbb5dbf4cedc0b2cf8d093c59c528fe068fffc387e1a7e.scope - libcontainer container cd459af955e4391d8abbb5dbf4cedc0b2cf8d093c59c528fe068fffc387e1a7e. Aug 13 07:30:57.792832 systemd[1]: Started cri-containerd-5c040fd56b6c6760e99971bba397c523e32b1ebb9098aaf2f6555030ea2018da.scope - libcontainer container 5c040fd56b6c6760e99971bba397c523e32b1ebb9098aaf2f6555030ea2018da. 
Aug 13 07:30:57.836594 containerd[1500]: time="2025-08-13T07:30:57.836531086Z" level=info msg="StartContainer for \"cd459af955e4391d8abbb5dbf4cedc0b2cf8d093c59c528fe068fffc387e1a7e\" returns successfully" Aug 13 07:30:57.861893 containerd[1500]: time="2025-08-13T07:30:57.861816563Z" level=info msg="StartContainer for \"5c040fd56b6c6760e99971bba397c523e32b1ebb9098aaf2f6555030ea2018da\" returns successfully" Aug 13 07:30:58.858557 kubelet[2695]: I0813 07:30:58.857530 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mrg4z" podStartSLOduration=27.85749094 podStartE2EDuration="27.85749094s" podCreationTimestamp="2025-08-13 07:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:30:58.843388056 +0000 UTC m=+33.597391914" watchObservedRunningTime="2025-08-13 07:30:58.85749094 +0000 UTC m=+33.611494780" Aug 13 07:30:58.918468 kubelet[2695]: I0813 07:30:58.917948 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bdq5b" podStartSLOduration=28.917928488 podStartE2EDuration="28.917928488s" podCreationTimestamp="2025-08-13 07:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:30:58.888732646 +0000 UTC m=+33.642736506" watchObservedRunningTime="2025-08-13 07:30:58.917928488 +0000 UTC m=+33.671932333" Aug 13 07:31:45.103202 systemd[1]: Started sshd@12-10.243.76.66:22-139.178.68.195:38400.service - OpenSSH per-connection server daemon (139.178.68.195:38400). Aug 13 07:31:46.033609 sshd[4103]: Accepted publickey for core from 139.178.68.195 port 38400 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:31:46.036998 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:31:46.048718 systemd-logind[1485]: New session 12 of user core. Aug 13 07:31:46.061488 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:31:47.234154 sshd[4103]: pam_unix(sshd:session): session closed for user core Aug 13 07:31:47.241227 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:31:47.242496 systemd[1]: sshd@12-10.243.76.66:22-139.178.68.195:38400.service: Deactivated successfully. Aug 13 07:31:47.247060 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:31:47.248888 systemd-logind[1485]: Removed session 12. Aug 13 07:31:50.174778 update_engine[1486]: I20250813 07:31:50.174245 1486 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 07:31:50.174778 update_engine[1486]: I20250813 07:31:50.174386 1486 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 07:31:50.179862 update_engine[1486]: I20250813 07:31:50.179433 1486 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 07:31:50.180979 update_engine[1486]: I20250813 07:31:50.180835 1486 omaha_request_params.cc:62] Current group set to lts Aug 13 07:31:50.181346 update_engine[1486]: I20250813 07:31:50.181293 1486 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 07:31:50.181462 update_engine[1486]: I20250813 07:31:50.181428 1486 update_attempter.cc:643] Scheduling an action processor start. 
Aug 13 07:31:50.181613 update_engine[1486]: I20250813 07:31:50.181578 1486 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 07:31:50.182002 update_engine[1486]: I20250813 07:31:50.181828 1486 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 07:31:50.183237 update_engine[1486]: I20250813 07:31:50.182209 1486 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 07:31:50.183237 update_engine[1486]: I20250813 07:31:50.182236 1486 omaha_request_action.cc:272] Request: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: Aug 13 07:31:50.183237 update_engine[1486]: I20250813 07:31:50.182260 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 07:31:50.189154 update_engine[1486]: I20250813 07:31:50.189090 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 07:31:50.189756 update_engine[1486]: I20250813 07:31:50.189697 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 07:31:50.202987 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 07:31:50.259240 update_engine[1486]: E20250813 07:31:50.259117 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 07:31:50.259422 update_engine[1486]: I20250813 07:31:50.259315 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 07:31:52.393987 systemd[1]: Started sshd@13-10.243.76.66:22-139.178.68.195:57068.service - OpenSSH per-connection server daemon (139.178.68.195:57068). Aug 13 07:31:53.341512 sshd[4118]: Accepted publickey for core from 139.178.68.195 port 57068 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:31:53.344196 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:31:53.352330 systemd-logind[1485]: New session 13 of user core. Aug 13 07:31:53.361821 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:31:54.117464 sshd[4118]: pam_unix(sshd:session): session closed for user core Aug 13 07:31:54.126448 systemd[1]: sshd@13-10.243.76.66:22-139.178.68.195:57068.service: Deactivated successfully. Aug 13 07:31:54.130049 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:31:54.132144 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:31:54.133760 systemd-logind[1485]: Removed session 13. Aug 13 07:31:59.289072 systemd[1]: Started sshd@14-10.243.76.66:22-139.178.68.195:57084.service - OpenSSH per-connection server daemon (139.178.68.195:57084). Aug 13 07:32:00.139096 update_engine[1486]: I20250813 07:32:00.138751 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 07:32:00.140717 update_engine[1486]: I20250813 07:32:00.140057 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 07:32:00.140717 update_engine[1486]: I20250813 07:32:00.140581 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 07:32:00.141242 update_engine[1486]: E20250813 07:32:00.141196 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 07:32:00.141336 update_engine[1486]: I20250813 07:32:00.141286 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 07:32:00.203958 sshd[4133]: Accepted publickey for core from 139.178.68.195 port 57084 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:00.206424 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:00.214694 systemd-logind[1485]: New session 14 of user core. Aug 13 07:32:00.219865 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:32:00.946454 sshd[4133]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:00.952121 systemd[1]: sshd@14-10.243.76.66:22-139.178.68.195:57084.service: Deactivated successfully. Aug 13 07:32:00.956344 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:32:00.959120 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:32:00.961464 systemd-logind[1485]: Removed session 14. Aug 13 07:32:06.108027 systemd[1]: Started sshd@15-10.243.76.66:22-139.178.68.195:47120.service - OpenSSH per-connection server daemon (139.178.68.195:47120). Aug 13 07:32:07.023790 sshd[4149]: Accepted publickey for core from 139.178.68.195 port 47120 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:07.026197 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:07.034947 systemd-logind[1485]: New session 15 of user core. Aug 13 07:32:07.040922 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:32:07.772025 sshd[4149]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:07.776547 systemd[1]: sshd@15-10.243.76.66:22-139.178.68.195:47120.service: Deactivated successfully. Aug 13 07:32:07.779453 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:32:07.781501 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:32:07.783714 systemd-logind[1485]: Removed session 15. Aug 13 07:32:07.930870 systemd[1]: Started sshd@16-10.243.76.66:22-139.178.68.195:47122.service - OpenSSH per-connection server daemon (139.178.68.195:47122). Aug 13 07:32:08.856477 sshd[4163]: Accepted publickey for core from 139.178.68.195 port 47122 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:08.859053 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:08.867798 systemd-logind[1485]: New session 16 of user core. Aug 13 07:32:08.878845 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:32:09.666460 sshd[4163]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:09.674095 systemd[1]: sshd@16-10.243.76.66:22-139.178.68.195:47122.service: Deactivated successfully. Aug 13 07:32:09.677359 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:32:09.680063 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:32:09.683038 systemd-logind[1485]: Removed session 16. Aug 13 07:32:09.830011 systemd[1]: Started sshd@17-10.243.76.66:22-139.178.68.195:47130.service - OpenSSH per-connection server daemon (139.178.68.195:47130). 
Aug 13 07:32:10.135642 update_engine[1486]: I20250813 07:32:10.133731 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 07:32:10.135642 update_engine[1486]: I20250813 07:32:10.134325 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 07:32:10.136559 update_engine[1486]: I20250813 07:32:10.136497 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 07:32:10.139109 update_engine[1486]: E20250813 07:32:10.138995 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 07:32:10.139109 update_engine[1486]: I20250813 07:32:10.139066 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 07:32:10.726656 sshd[4175]: Accepted publickey for core from 139.178.68.195 port 47130 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:10.729280 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:10.736593 systemd-logind[1485]: New session 17 of user core. Aug 13 07:32:10.741795 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:32:11.453259 sshd[4175]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:11.457598 systemd[1]: sshd@17-10.243.76.66:22-139.178.68.195:47130.service: Deactivated successfully. Aug 13 07:32:11.460395 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:32:11.462222 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:32:11.463729 systemd-logind[1485]: Removed session 17. Aug 13 07:32:16.613989 systemd[1]: Started sshd@18-10.243.76.66:22-139.178.68.195:41334.service - OpenSSH per-connection server daemon (139.178.68.195:41334). Aug 13 07:32:17.512174 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 41334 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:17.514732 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:17.522135 systemd-logind[1485]: New session 18 of user core. Aug 13 07:32:17.530887 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:32:18.235289 sshd[4187]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:18.240415 systemd[1]: sshd@18-10.243.76.66:22-139.178.68.195:41334.service: Deactivated successfully. Aug 13 07:32:18.244249 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:32:18.246052 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:32:18.247692 systemd-logind[1485]: Removed session 18. Aug 13 07:32:18.395980 systemd[1]: Started sshd@19-10.243.76.66:22-139.178.68.195:41340.service - OpenSSH per-connection server daemon (139.178.68.195:41340). Aug 13 07:32:19.305177 sshd[4199]: Accepted publickey for core from 139.178.68.195 port 41340 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:19.307698 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:19.317352 systemd-logind[1485]: New session 19 of user core. Aug 13 07:32:19.322842 systemd[1]: Started session-19.scope - Session 19 of User core. 
Aug 13 07:32:20.134419 update_engine[1486]: I20250813 07:32:20.133195 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 07:32:20.134419 update_engine[1486]: I20250813 07:32:20.133877 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 07:32:20.135247 update_engine[1486]: I20250813 07:32:20.135200 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 07:32:20.135599 update_engine[1486]: E20250813 07:32:20.135565 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 07:32:20.135837 update_engine[1486]: I20250813 07:32:20.135807 1486 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 07:32:20.136555 update_engine[1486]: I20250813 07:32:20.135948 1486 omaha_request_action.cc:617] Omaha request response: Aug 13 07:32:20.136555 update_engine[1486]: E20250813 07:32:20.136132 1486 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 07:32:20.136555 update_engine[1486]: I20250813 07:32:20.136345 1486 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 07:32:20.136555 update_engine[1486]: I20250813 07:32:20.136370 1486 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 07:32:20.136555 update_engine[1486]: I20250813 07:32:20.136382 1486 update_attempter.cc:306] Processing Done. Aug 13 07:32:20.136555 update_engine[1486]: E20250813 07:32:20.136441 1486 update_attempter.cc:619] Update failed. Aug 13 07:32:20.139409 update_engine[1486]: I20250813 07:32:20.139374 1486 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140100 1486 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140129 1486 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140275 1486 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140345 1486 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140363 1486 omaha_request_action.cc:272] Request: Aug 13 07:32:20.140959 update_engine[1486]: Aug 13 07:32:20.140959 update_engine[1486]: Aug 13 07:32:20.140959 update_engine[1486]: Aug 13 07:32:20.140959 update_engine[1486]: Aug 13 07:32:20.140959 update_engine[1486]: Aug 13 07:32:20.140959 update_engine[1486]: Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140374 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140681 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 07:32:20.140959 update_engine[1486]: I20250813 07:32:20.140897 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 07:32:20.143071 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 07:32:20.143993 update_engine[1486]: E20250813 07:32:20.143038 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 07:32:20.143993 update_engine[1486]: I20250813 07:32:20.143102 1486 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 07:32:20.143993 update_engine[1486]: I20250813 07:32:20.143121 1486 omaha_request_action.cc:617] Omaha request response: Aug 13 07:32:20.143993 update_engine[1486]: I20250813 07:32:20.143133 1486 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 07:32:20.143993 update_engine[1486]: I20250813 07:32:20.143144 1486 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 07:32:20.143993 update_engine[1486]: I20250813 07:32:20.143154 1486 update_attempter.cc:306] Processing Done. Aug 13 07:32:20.143993 update_engine[1486]: I20250813 07:32:20.143167 1486 update_attempter.cc:310] Error event sent. Aug 13 07:32:20.143993 update_engine[1486]: I20250813 07:32:20.143193 1486 update_check_scheduler.cc:74] Next update check in 40m11s Aug 13 07:32:20.144350 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 07:32:20.386855 sshd[4199]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:20.393485 systemd[1]: sshd@19-10.243.76.66:22-139.178.68.195:41340.service: Deactivated successfully. Aug 13 07:32:20.396459 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:32:20.398191 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:32:20.399658 systemd-logind[1485]: Removed session 19. Aug 13 07:32:20.549030 systemd[1]: Started sshd@20-10.243.76.66:22-139.178.68.195:36542.service - OpenSSH per-connection server daemon (139.178.68.195:36542). Aug 13 07:32:21.448693 sshd[4210]: Accepted publickey for core from 139.178.68.195 port 36542 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:21.452522 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:21.460290 systemd-logind[1485]: New session 20 of user core. Aug 13 07:32:21.467923 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:32:22.961477 sshd[4210]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:22.968253 systemd[1]: sshd@20-10.243.76.66:22-139.178.68.195:36542.service: Deactivated successfully. Aug 13 07:32:22.972539 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:32:22.975039 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:32:22.976971 systemd-logind[1485]: Removed session 20. Aug 13 07:32:23.125063 systemd[1]: Started sshd@21-10.243.76.66:22-139.178.68.195:36556.service - OpenSSH per-connection server daemon (139.178.68.195:36556). Aug 13 07:32:24.024005 sshd[4228]: Accepted publickey for core from 139.178.68.195 port 36556 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:24.026579 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:24.034458 systemd-logind[1485]: New session 21 of user core. Aug 13 07:32:24.040867 systemd[1]: Started session-21.scope - Session 21 of User core. 
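The update_engine records between 07:31:50 and 07:32:20 above show an Omaha check posted to the literal host "disabled", three DNS-resolution failures roughly ten seconds apart (retry 1 through retry 3), a zero-byte transfer reported as error code 2000 (kActionCodeOmahaErrorInHTTPResponse), a failed error-event post, and the next check scheduled in 40m11s. A short Python sketch, assuming the journal is piped in on stdin with one record per line as journalctl emits it, that pulls out just those retry and scheduling entries so the sequence is easy to read:

import re
import sys

# Patterns taken from the update_engine messages quoted in the journal above.
patterns = [
    re.compile(r"No HTTP response, retry (\d+)"),
    re.compile(r"Transfer resulted in an error \((\d+)\), (\d+) bytes downloaded"),
    re.compile(r"Next update check in ([\dms]+)"),
]

for line in sys.stdin:
    if "update_engine" not in line:
        continue
    for pat in patterns:
        m = pat.search(line)
        if m:
            # Keep the syslog-style timestamp (first three whitespace-separated fields).
            stamp = " ".join(line.split()[:3])
            print(stamp, m.group(0))
            break

Run over the records quoted above, this would list the retries at 07:31:50, 07:32:00 and 07:32:10, the zero-byte transfer errors at 07:32:20, and the closing "Next update check in 40m11s" entry.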
Aug 13 07:32:24.969164 sshd[4228]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:24.975522 systemd[1]: sshd@21-10.243.76.66:22-139.178.68.195:36556.service: Deactivated successfully. Aug 13 07:32:24.978184 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:32:24.980212 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:32:24.982005 systemd-logind[1485]: Removed session 21. Aug 13 07:32:25.131063 systemd[1]: Started sshd@22-10.243.76.66:22-139.178.68.195:36570.service - OpenSSH per-connection server daemon (139.178.68.195:36570). Aug 13 07:32:26.022247 sshd[4239]: Accepted publickey for core from 139.178.68.195 port 36570 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:26.024670 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:26.032471 systemd-logind[1485]: New session 22 of user core. Aug 13 07:32:26.039897 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:32:26.755148 sshd[4239]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:26.761130 systemd[1]: sshd@22-10.243.76.66:22-139.178.68.195:36570.service: Deactivated successfully. Aug 13 07:32:26.764467 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:32:26.765681 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:32:26.768095 systemd-logind[1485]: Removed session 22. Aug 13 07:32:31.925148 systemd[1]: Started sshd@23-10.243.76.66:22-139.178.68.195:56482.service - OpenSSH per-connection server daemon (139.178.68.195:56482). Aug 13 07:32:32.818966 sshd[4256]: Accepted publickey for core from 139.178.68.195 port 56482 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:32.819940 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:32.827156 systemd-logind[1485]: New session 23 of user core. Aug 13 07:32:32.835880 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:32:33.533612 sshd[4256]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:33.541281 systemd[1]: sshd@23-10.243.76.66:22-139.178.68.195:56482.service: Deactivated successfully. Aug 13 07:32:33.544241 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:32:33.545845 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:32:33.548020 systemd-logind[1485]: Removed session 23. Aug 13 07:32:38.690917 systemd[1]: Started sshd@24-10.243.76.66:22-139.178.68.195:56494.service - OpenSSH per-connection server daemon (139.178.68.195:56494). Aug 13 07:32:39.598059 sshd[4271]: Accepted publickey for core from 139.178.68.195 port 56494 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:39.600235 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:39.609439 systemd-logind[1485]: New session 24 of user core. Aug 13 07:32:39.618886 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:32:40.313359 sshd[4271]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:40.318329 systemd[1]: sshd@24-10.243.76.66:22-139.178.68.195:56494.service: Deactivated successfully. Aug 13 07:32:40.320895 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:32:40.322407 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:32:40.323892 systemd-logind[1485]: Removed session 24. 
Aug 13 07:32:45.479156 systemd[1]: Started sshd@25-10.243.76.66:22-139.178.68.195:48170.service - OpenSSH per-connection server daemon (139.178.68.195:48170). Aug 13 07:32:46.375519 sshd[4284]: Accepted publickey for core from 139.178.68.195 port 48170 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:46.378119 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:46.386150 systemd-logind[1485]: New session 25 of user core. Aug 13 07:32:46.390801 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 07:32:47.096038 sshd[4284]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:47.101007 systemd[1]: sshd@25-10.243.76.66:22-139.178.68.195:48170.service: Deactivated successfully. Aug 13 07:32:47.103779 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:32:47.105296 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:32:47.106976 systemd-logind[1485]: Removed session 25. Aug 13 07:32:47.258153 systemd[1]: Started sshd@26-10.243.76.66:22-139.178.68.195:48178.service - OpenSSH per-connection server daemon (139.178.68.195:48178). Aug 13 07:32:48.157395 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 48178 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:48.161347 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:48.169674 systemd-logind[1485]: New session 26 of user core. Aug 13 07:32:48.181133 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 07:32:51.285492 systemd[1]: run-containerd-runc-k8s.io-82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc-runc.THPe9Y.mount: Deactivated successfully. Aug 13 07:32:51.335649 containerd[1500]: time="2025-08-13T07:32:51.333041963Z" level=info msg="StopContainer for \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\" with timeout 30 (s)" Aug 13 07:32:51.336541 containerd[1500]: time="2025-08-13T07:32:51.336476534Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:32:51.339759 containerd[1500]: time="2025-08-13T07:32:51.337698523Z" level=info msg="Stop container \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\" with signal terminated" Aug 13 07:32:51.377860 containerd[1500]: time="2025-08-13T07:32:51.377798216Z" level=info msg="StopContainer for \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\" with timeout 2 (s)" Aug 13 07:32:51.379032 containerd[1500]: time="2025-08-13T07:32:51.378903422Z" level=info msg="Stop container \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\" with signal terminated" Aug 13 07:32:51.398077 systemd-networkd[1434]: lxc_health: Link DOWN Aug 13 07:32:51.398089 systemd-networkd[1434]: lxc_health: Lost carrier Aug 13 07:32:51.408643 systemd[1]: cri-containerd-72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8.scope: Deactivated successfully. Aug 13 07:32:51.431198 systemd[1]: cri-containerd-82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc.scope: Deactivated successfully. Aug 13 07:32:51.432004 systemd[1]: cri-containerd-82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc.scope: Consumed 10.335s CPU time. 
Aug 13 07:32:51.471768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8-rootfs.mount: Deactivated successfully. Aug 13 07:32:51.480528 containerd[1500]: time="2025-08-13T07:32:51.480360306Z" level=info msg="shim disconnected" id=72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8 namespace=k8s.io Aug 13 07:32:51.480782 containerd[1500]: time="2025-08-13T07:32:51.480532589Z" level=warning msg="cleaning up after shim disconnected" id=72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8 namespace=k8s.io Aug 13 07:32:51.480782 containerd[1500]: time="2025-08-13T07:32:51.480569565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:32:51.483445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc-rootfs.mount: Deactivated successfully. Aug 13 07:32:51.490792 containerd[1500]: time="2025-08-13T07:32:51.490695642Z" level=info msg="shim disconnected" id=82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc namespace=k8s.io Aug 13 07:32:51.490792 containerd[1500]: time="2025-08-13T07:32:51.490786299Z" level=warning msg="cleaning up after shim disconnected" id=82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc namespace=k8s.io Aug 13 07:32:51.491072 containerd[1500]: time="2025-08-13T07:32:51.490802425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:32:51.522648 containerd[1500]: time="2025-08-13T07:32:51.520851843Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:32:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:32:51.528863 containerd[1500]: time="2025-08-13T07:32:51.528819160Z" level=info msg="StopContainer for \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\" returns successfully" Aug 13 07:32:51.530599 containerd[1500]: time="2025-08-13T07:32:51.530554196Z" level=info msg="StopContainer for \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\" returns successfully" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.535738386Z" level=info msg="StopPodSandbox for \"2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0\"" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.535738441Z" level=info msg="StopPodSandbox for \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\"" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.535952221Z" level=info msg="Container to stop \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.535986676Z" level=info msg="Container to stop \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.536021019Z" level=info msg="Container to stop \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.536039155Z" level=info msg="Container to stop \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.536054594Z" level=info msg="Container to stop \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:32:51.538810 containerd[1500]: time="2025-08-13T07:32:51.536254038Z" level=info msg="Container to stop \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:32:51.540240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0-shm.mount: Deactivated successfully. Aug 13 07:32:51.540389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8-shm.mount: Deactivated successfully. Aug 13 07:32:51.561091 systemd[1]: cri-containerd-3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8.scope: Deactivated successfully. Aug 13 07:32:51.572769 systemd[1]: cri-containerd-2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0.scope: Deactivated successfully. Aug 13 07:32:51.614153 containerd[1500]: time="2025-08-13T07:32:51.613369997Z" level=info msg="shim disconnected" id=3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8 namespace=k8s.io Aug 13 07:32:51.614153 containerd[1500]: time="2025-08-13T07:32:51.614128287Z" level=warning msg="cleaning up after shim disconnected" id=3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8 namespace=k8s.io Aug 13 07:32:51.614153 containerd[1500]: time="2025-08-13T07:32:51.614150532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:32:51.632261 containerd[1500]: time="2025-08-13T07:32:51.632192655Z" level=info msg="shim disconnected" id=2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0 namespace=k8s.io Aug 13 07:32:51.632659 containerd[1500]: time="2025-08-13T07:32:51.632516176Z" level=warning msg="cleaning up after shim disconnected" id=2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0 namespace=k8s.io Aug 13 07:32:51.632659 containerd[1500]: time="2025-08-13T07:32:51.632542067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:32:51.642603 containerd[1500]: time="2025-08-13T07:32:51.642311721Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:32:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:32:51.661700 containerd[1500]: time="2025-08-13T07:32:51.658704359Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:32:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:32:51.661700 containerd[1500]: time="2025-08-13T07:32:51.660327571Z" level=info msg="TearDown network for sandbox \"2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0\" successfully" Aug 13 07:32:51.661700 containerd[1500]: time="2025-08-13T07:32:51.660388270Z" level=info msg="StopPodSandbox for \"2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0\" returns successfully" Aug 13 07:32:51.662713 containerd[1500]: time="2025-08-13T07:32:51.662386029Z" level=info msg="TearDown network for sandbox 
\"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" successfully" Aug 13 07:32:51.662713 containerd[1500]: time="2025-08-13T07:32:51.662419583Z" level=info msg="StopPodSandbox for \"3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8\" returns successfully" Aug 13 07:32:51.747493 kubelet[2695]: I0813 07:32:51.747427 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-clustermesh-secrets\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.748695 kubelet[2695]: I0813 07:32:51.748436 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-lib-modules\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.748695 kubelet[2695]: I0813 07:32:51.748479 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-kernel\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.748695 kubelet[2695]: I0813 07:32:51.748506 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-net\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.748695 kubelet[2695]: I0813 07:32:51.748531 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-etc-cni-netd\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.757537 kubelet[2695]: I0813 07:32:51.756222 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:32:51.757537 kubelet[2695]: I0813 07:32:51.757394 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.757537 kubelet[2695]: I0813 07:32:51.757460 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.757537 kubelet[2695]: I0813 07:32:51.757494 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.757920 kubelet[2695]: I0813 07:32:51.756145 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.851134 kubelet[2695]: I0813 07:32:51.849404 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-run\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.851134 kubelet[2695]: I0813 07:32:51.849471 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cni-path\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.851134 kubelet[2695]: I0813 07:32:51.849552 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmqj2\" (UniqueName: \"kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-kube-api-access-gmqj2\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.851134 kubelet[2695]: I0813 07:32:51.849590 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-xtables-lock\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.851134 kubelet[2695]: I0813 07:32:51.849645 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-bpf-maps\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.851134 kubelet[2695]: I0813 07:32:51.849691 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hubble-tls\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.852558 kubelet[2695]: I0813 07:32:51.849733 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-config-path\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.852558 kubelet[2695]: I0813 07:32:51.851584 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/71dc097a-e994-4252-bb5d-63cd78f0f615-cilium-config-path\") pod \"71dc097a-e994-4252-bb5d-63cd78f0f615\" (UID: \"71dc097a-e994-4252-bb5d-63cd78f0f615\") " Aug 13 07:32:51.852558 kubelet[2695]: I0813 07:32:51.851613 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-cgroup\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.852558 kubelet[2695]: I0813 07:32:51.851654 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hostproc\") pod \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\" (UID: \"3bcb37ff-a9d3-4466-b9c6-b6edd611b777\") " Aug 13 07:32:51.852558 kubelet[2695]: I0813 07:32:51.851700 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlblh\" (UniqueName: \"kubernetes.io/projected/71dc097a-e994-4252-bb5d-63cd78f0f615-kube-api-access-nlblh\") pod \"71dc097a-e994-4252-bb5d-63cd78f0f615\" (UID: \"71dc097a-e994-4252-bb5d-63cd78f0f615\") " Aug 13 07:32:51.852558 kubelet[2695]: I0813 07:32:51.851790 2695 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-lib-modules\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.852950 kubelet[2695]: I0813 07:32:51.851814 2695 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-kernel\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.852950 kubelet[2695]: I0813 07:32:51.851833 2695 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-clustermesh-secrets\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.852950 kubelet[2695]: I0813 07:32:51.851849 2695 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-etc-cni-netd\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.852950 kubelet[2695]: I0813 07:32:51.851865 2695 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-host-proc-sys-net\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.856819 kubelet[2695]: I0813 07:32:51.856741 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.856924 kubelet[2695]: I0813 07:32:51.856844 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.864646 kubelet[2695]: I0813 07:32:51.863793 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-kube-api-access-gmqj2" (OuterVolumeSpecName: "kube-api-access-gmqj2") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "kube-api-access-gmqj2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:32:51.864646 kubelet[2695]: I0813 07:32:51.863884 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dc097a-e994-4252-bb5d-63cd78f0f615-kube-api-access-nlblh" (OuterVolumeSpecName: "kube-api-access-nlblh") pod "71dc097a-e994-4252-bb5d-63cd78f0f615" (UID: "71dc097a-e994-4252-bb5d-63cd78f0f615"). InnerVolumeSpecName "kube-api-access-nlblh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:32:51.864646 kubelet[2695]: I0813 07:32:51.863912 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:32:51.864646 kubelet[2695]: I0813 07:32:51.863952 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.864646 kubelet[2695]: I0813 07:32:51.863983 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cni-path" (OuterVolumeSpecName: "cni-path") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.865025 kubelet[2695]: I0813 07:32:51.864016 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.865025 kubelet[2695]: I0813 07:32:51.864786 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:32:51.865025 kubelet[2695]: I0813 07:32:51.864838 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hostproc" (OuterVolumeSpecName: "hostproc") pod "3bcb37ff-a9d3-4466-b9c6-b6edd611b777" (UID: "3bcb37ff-a9d3-4466-b9c6-b6edd611b777"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:32:51.868911 kubelet[2695]: I0813 07:32:51.868882 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71dc097a-e994-4252-bb5d-63cd78f0f615-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "71dc097a-e994-4252-bb5d-63cd78f0f615" (UID: "71dc097a-e994-4252-bb5d-63cd78f0f615"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:32:51.953071 kubelet[2695]: I0813 07:32:51.953007 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-run\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.953071 kubelet[2695]: I0813 07:32:51.953060 2695 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cni-path\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.953071 kubelet[2695]: I0813 07:32:51.953078 2695 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmqj2\" (UniqueName: \"kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-kube-api-access-gmqj2\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953098 2695 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-xtables-lock\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953113 2695 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-bpf-maps\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953126 2695 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hubble-tls\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953154 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-config-path\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953173 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71dc097a-e994-4252-bb5d-63cd78f0f615-cilium-config-path\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953199 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-cilium-cgroup\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953221 2695 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bcb37ff-a9d3-4466-b9c6-b6edd611b777-hostproc\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:51.954173 kubelet[2695]: I0813 07:32:51.953237 2695 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nlblh\" (UniqueName: 
\"kubernetes.io/projected/71dc097a-e994-4252-bb5d-63cd78f0f615-kube-api-access-nlblh\") on node \"srv-qvhwp.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:32:52.165369 kubelet[2695]: I0813 07:32:52.163606 2695 scope.go:117] "RemoveContainer" containerID="72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8" Aug 13 07:32:52.167195 containerd[1500]: time="2025-08-13T07:32:52.167152019Z" level=info msg="RemoveContainer for \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\"" Aug 13 07:32:52.174096 systemd[1]: Removed slice kubepods-besteffort-pod71dc097a_e994_4252_bb5d_63cd78f0f615.slice - libcontainer container kubepods-besteffort-pod71dc097a_e994_4252_bb5d_63cd78f0f615.slice. Aug 13 07:32:52.177104 containerd[1500]: time="2025-08-13T07:32:52.177066407Z" level=info msg="RemoveContainer for \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\" returns successfully" Aug 13 07:32:52.187979 systemd[1]: Removed slice kubepods-burstable-pod3bcb37ff_a9d3_4466_b9c6_b6edd611b777.slice - libcontainer container kubepods-burstable-pod3bcb37ff_a9d3_4466_b9c6_b6edd611b777.slice. Aug 13 07:32:52.188521 kubelet[2695]: I0813 07:32:52.188461 2695 scope.go:117] "RemoveContainer" containerID="72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8" Aug 13 07:32:52.188782 systemd[1]: kubepods-burstable-pod3bcb37ff_a9d3_4466_b9c6_b6edd611b777.slice: Consumed 10.472s CPU time. Aug 13 07:32:52.215997 containerd[1500]: time="2025-08-13T07:32:52.195109297Z" level=error msg="ContainerStatus for \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\": not found" Aug 13 07:32:52.217021 kubelet[2695]: E0813 07:32:52.216803 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\": not found" containerID="72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8" Aug 13 07:32:52.218885 kubelet[2695]: I0813 07:32:52.218675 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8"} err="failed to get container status \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"72c1c88e7d3c75cb71c30221ea69d82a27e49e9cb2df632df1c7d3846530b8e8\": not found" Aug 13 07:32:52.219073 kubelet[2695]: I0813 07:32:52.219012 2695 scope.go:117] "RemoveContainer" containerID="82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc" Aug 13 07:32:52.225365 containerd[1500]: time="2025-08-13T07:32:52.225302392Z" level=info msg="RemoveContainer for \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\"" Aug 13 07:32:52.232031 containerd[1500]: time="2025-08-13T07:32:52.231945853Z" level=info msg="RemoveContainer for \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\" returns successfully" Aug 13 07:32:52.232713 kubelet[2695]: I0813 07:32:52.232535 2695 scope.go:117] "RemoveContainer" containerID="2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad" Aug 13 07:32:52.235003 containerd[1500]: time="2025-08-13T07:32:52.234958254Z" level=info msg="RemoveContainer for 
\"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\"" Aug 13 07:32:52.264854 containerd[1500]: time="2025-08-13T07:32:52.264783964Z" level=info msg="RemoveContainer for \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\" returns successfully" Aug 13 07:32:52.265643 kubelet[2695]: I0813 07:32:52.265581 2695 scope.go:117] "RemoveContainer" containerID="13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014" Aug 13 07:32:52.267985 containerd[1500]: time="2025-08-13T07:32:52.267915680Z" level=info msg="RemoveContainer for \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\"" Aug 13 07:32:52.271059 containerd[1500]: time="2025-08-13T07:32:52.270993867Z" level=info msg="RemoveContainer for \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\" returns successfully" Aug 13 07:32:52.271271 kubelet[2695]: I0813 07:32:52.271195 2695 scope.go:117] "RemoveContainer" containerID="425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a" Aug 13 07:32:52.272833 containerd[1500]: time="2025-08-13T07:32:52.272793404Z" level=info msg="RemoveContainer for \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\"" Aug 13 07:32:52.277042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e0c1280fef920037ed348568abf405484bc9fe1ef8d1b9fc47fd7d1fcd82db0-rootfs.mount: Deactivated successfully. Aug 13 07:32:52.277870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3498313fb013bbd0b1206ee335ccbdd35bafe28f01d33014b4ebfbd5631129a8-rootfs.mount: Deactivated successfully. Aug 13 07:32:52.278690 containerd[1500]: time="2025-08-13T07:32:52.278330671Z" level=info msg="RemoveContainer for \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\" returns successfully" Aug 13 07:32:52.278862 kubelet[2695]: I0813 07:32:52.278598 2695 scope.go:117] "RemoveContainer" containerID="3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2" Aug 13 07:32:52.278146 systemd[1]: var-lib-kubelet-pods-71dc097a\x2de994\x2d4252\x2dbb5d\x2d63cd78f0f615-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnlblh.mount: Deactivated successfully. Aug 13 07:32:52.278282 systemd[1]: var-lib-kubelet-pods-3bcb37ff\x2da9d3\x2d4466\x2db9c6\x2db6edd611b777-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 07:32:52.278470 systemd[1]: var-lib-kubelet-pods-3bcb37ff\x2da9d3\x2d4466\x2db9c6\x2db6edd611b777-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmqj2.mount: Deactivated successfully. Aug 13 07:32:52.278872 systemd[1]: var-lib-kubelet-pods-3bcb37ff\x2da9d3\x2d4466\x2db9c6\x2db6edd611b777-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 13 07:32:52.283667 containerd[1500]: time="2025-08-13T07:32:52.283384479Z" level=info msg="RemoveContainer for \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\"" Aug 13 07:32:52.286687 containerd[1500]: time="2025-08-13T07:32:52.286543221Z" level=info msg="RemoveContainer for \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\" returns successfully" Aug 13 07:32:52.286878 kubelet[2695]: I0813 07:32:52.286855 2695 scope.go:117] "RemoveContainer" containerID="82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc" Aug 13 07:32:52.287140 containerd[1500]: time="2025-08-13T07:32:52.287091541Z" level=error msg="ContainerStatus for \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\": not found" Aug 13 07:32:52.287293 kubelet[2695]: E0813 07:32:52.287254 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\": not found" containerID="82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc" Aug 13 07:32:52.287412 kubelet[2695]: I0813 07:32:52.287302 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc"} err="failed to get container status \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"82d08721fcb9751c4fffb7b9d060546f11715adb79057c5c9d61be61c06c33bc\": not found" Aug 13 07:32:52.287412 kubelet[2695]: I0813 07:32:52.287366 2695 scope.go:117] "RemoveContainer" containerID="2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad" Aug 13 07:32:52.287921 containerd[1500]: time="2025-08-13T07:32:52.287741385Z" level=error msg="ContainerStatus for \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\": not found" Aug 13 07:32:52.288047 kubelet[2695]: E0813 07:32:52.287981 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\": not found" containerID="2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad" Aug 13 07:32:52.288176 kubelet[2695]: I0813 07:32:52.288059 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad"} err="failed to get container status \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d1ecf8ad3a73c52a9d809c8331560d2458c4c894f9511b681a4cbf5046f26ad\": not found" Aug 13 07:32:52.288176 kubelet[2695]: I0813 07:32:52.288090 2695 scope.go:117] "RemoveContainer" containerID="13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014" Aug 13 07:32:52.288361 containerd[1500]: time="2025-08-13T07:32:52.288288838Z" level=error msg="ContainerStatus for \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\": not found" Aug 13 07:32:52.288491 kubelet[2695]: E0813 07:32:52.288458 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\": not found" containerID="13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014" Aug 13 07:32:52.288567 kubelet[2695]: I0813 07:32:52.288486 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014"} err="failed to get container status \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\": rpc error: code = NotFound desc = an error occurred when try to find container \"13e44ddfeeb4de6ed5750e8364a9ef7ecb69ecde1374a83a703b3a4275e27014\": not found" Aug 13 07:32:52.288567 kubelet[2695]: I0813 07:32:52.288521 2695 scope.go:117] "RemoveContainer" containerID="425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a" Aug 13 07:32:52.289109 containerd[1500]: time="2025-08-13T07:32:52.289058180Z" level=error msg="ContainerStatus for \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\": not found" Aug 13 07:32:52.289366 kubelet[2695]: E0813 07:32:52.289322 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\": not found" containerID="425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a" Aug 13 07:32:52.289441 kubelet[2695]: I0813 07:32:52.289384 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a"} err="failed to get container status \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\": rpc error: code = NotFound desc = an error occurred when try to find container \"425fba5f46b702c1e79008b200aa9ea18090c8c32f8c4864c97ae3547546d52a\": not found" Aug 13 07:32:52.289441 kubelet[2695]: I0813 07:32:52.289417 2695 scope.go:117] "RemoveContainer" containerID="3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2" Aug 13 07:32:52.289975 containerd[1500]: time="2025-08-13T07:32:52.289927170Z" level=error msg="ContainerStatus for \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\": not found" Aug 13 07:32:52.290223 kubelet[2695]: E0813 07:32:52.290137 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\": not found" containerID="3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2" Aug 13 07:32:52.290328 kubelet[2695]: I0813 07:32:52.290273 2695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2"} err="failed to get container status \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e5702c9b3e52d09c2f38f29dddba99cce3e1f00e3b51a61e516a31d0463b1b2\": not found" Aug 13 07:32:53.193676 sshd[4297]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:53.200105 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit. Aug 13 07:32:53.201283 systemd[1]: sshd@26-10.243.76.66:22-139.178.68.195:48178.service: Deactivated successfully. Aug 13 07:32:53.205288 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 07:32:53.205934 systemd[1]: session-26.scope: Consumed 1.796s CPU time. Aug 13 07:32:53.208163 systemd-logind[1485]: Removed session 26. Aug 13 07:32:53.399014 systemd[1]: Started sshd@27-10.243.76.66:22-139.178.68.195:50840.service - OpenSSH per-connection server daemon (139.178.68.195:50840). Aug 13 07:32:53.617843 kubelet[2695]: I0813 07:32:53.617776 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bcb37ff-a9d3-4466-b9c6-b6edd611b777" path="/var/lib/kubelet/pods/3bcb37ff-a9d3-4466-b9c6-b6edd611b777/volumes" Aug 13 07:32:53.619261 kubelet[2695]: I0813 07:32:53.619234 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71dc097a-e994-4252-bb5d-63cd78f0f615" path="/var/lib/kubelet/pods/71dc097a-e994-4252-bb5d-63cd78f0f615/volumes" Aug 13 07:32:54.316897 sshd[4461]: Accepted publickey for core from 139.178.68.195 port 50840 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:54.319861 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:54.328254 systemd-logind[1485]: New session 27 of user core. Aug 13 07:32:54.332920 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 07:32:55.780668 kubelet[2695]: E0813 07:32:55.778979 2695 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 07:32:55.987418 kubelet[2695]: I0813 07:32:55.986798 2695 memory_manager.go:355] "RemoveStaleState removing state" podUID="3bcb37ff-a9d3-4466-b9c6-b6edd611b777" containerName="cilium-agent" Aug 13 07:32:55.987418 kubelet[2695]: I0813 07:32:55.986841 2695 memory_manager.go:355] "RemoveStaleState removing state" podUID="71dc097a-e994-4252-bb5d-63cd78f0f615" containerName="cilium-operator" Aug 13 07:32:56.049505 systemd[1]: Created slice kubepods-burstable-pod77836bd1_d9fc_4829_9902_5f9aa7b43b9b.slice - libcontainer container kubepods-burstable-pod77836bd1_d9fc_4829_9902_5f9aa7b43b9b.slice. 
Aug 13 07:32:56.084097 kubelet[2695]: I0813 07:32:56.083970 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-cni-path\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.084097 kubelet[2695]: I0813 07:32:56.084030 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-host-proc-sys-net\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.084097 kubelet[2695]: I0813 07:32:56.084062 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pglv\" (UniqueName: \"kubernetes.io/projected/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-kube-api-access-8pglv\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085424 kubelet[2695]: I0813 07:32:56.085361 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-cilium-run\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085539 kubelet[2695]: I0813 07:32:56.085425 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-bpf-maps\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085539 kubelet[2695]: I0813 07:32:56.085476 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-etc-cni-netd\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085539 kubelet[2695]: I0813 07:32:56.085506 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-cilium-cgroup\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085728 kubelet[2695]: I0813 07:32:56.085543 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-host-proc-sys-kernel\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085728 kubelet[2695]: I0813 07:32:56.085587 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-hostproc\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085728 kubelet[2695]: I0813 07:32:56.085634 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-lib-modules\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085728 kubelet[2695]: I0813 07:32:56.085673 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-xtables-lock\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085728 kubelet[2695]: I0813 07:32:56.085717 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-clustermesh-secrets\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085999 kubelet[2695]: I0813 07:32:56.085752 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-cilium-config-path\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085999 kubelet[2695]: I0813 07:32:56.085805 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-cilium-ipsec-secrets\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.085999 kubelet[2695]: I0813 07:32:56.085842 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77836bd1-d9fc-4829-9902-5f9aa7b43b9b-hubble-tls\") pod \"cilium-7j248\" (UID: \"77836bd1-d9fc-4829-9902-5f9aa7b43b9b\") " pod="kube-system/cilium-7j248" Aug 13 07:32:56.126367 sshd[4461]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:56.131610 systemd[1]: sshd@27-10.243.76.66:22-139.178.68.195:50840.service: Deactivated successfully. Aug 13 07:32:56.134376 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 07:32:56.134694 systemd[1]: session-27.scope: Consumed 1.084s CPU time. Aug 13 07:32:56.135482 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit. Aug 13 07:32:56.137460 systemd-logind[1485]: Removed session 27. Aug 13 07:32:56.289325 systemd[1]: Started sshd@28-10.243.76.66:22-139.178.68.195:50842.service - OpenSSH per-connection server daemon (139.178.68.195:50842). Aug 13 07:32:56.358708 containerd[1500]: time="2025-08-13T07:32:56.358499570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j248,Uid:77836bd1-d9fc-4829-9902-5f9aa7b43b9b,Namespace:kube-system,Attempt:0,}" Aug 13 07:32:56.403321 containerd[1500]: time="2025-08-13T07:32:56.403081708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:32:56.403321 containerd[1500]: time="2025-08-13T07:32:56.403265259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:32:56.403321 containerd[1500]: time="2025-08-13T07:32:56.403317763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:32:56.404082 containerd[1500]: time="2025-08-13T07:32:56.403542872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:32:56.432825 systemd[1]: Started cri-containerd-7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47.scope - libcontainer container 7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47. Aug 13 07:32:56.469470 containerd[1500]: time="2025-08-13T07:32:56.469390932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j248,Uid:77836bd1-d9fc-4829-9902-5f9aa7b43b9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\"" Aug 13 07:32:56.475840 containerd[1500]: time="2025-08-13T07:32:56.475677916Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:32:56.493162 containerd[1500]: time="2025-08-13T07:32:56.493101830Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf\"" Aug 13 07:32:56.495587 containerd[1500]: time="2025-08-13T07:32:56.495293655Z" level=info msg="StartContainer for \"9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf\"" Aug 13 07:32:56.531809 systemd[1]: Started cri-containerd-9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf.scope - libcontainer container 9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf. Aug 13 07:32:56.576505 containerd[1500]: time="2025-08-13T07:32:56.576095988Z" level=info msg="StartContainer for \"9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf\" returns successfully" Aug 13 07:32:56.601700 systemd[1]: cri-containerd-9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf.scope: Deactivated successfully. Aug 13 07:32:56.657917 containerd[1500]: time="2025-08-13T07:32:56.657708452Z" level=info msg="shim disconnected" id=9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf namespace=k8s.io Aug 13 07:32:56.659538 containerd[1500]: time="2025-08-13T07:32:56.659390153Z" level=warning msg="cleaning up after shim disconnected" id=9b877b7bc31cfa45326f0f46d57528bf2a022a63e292a7db19c6f228309d48cf namespace=k8s.io Aug 13 07:32:56.659538 containerd[1500]: time="2025-08-13T07:32:56.659485899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:32:57.188343 sshd[4476]: Accepted publickey for core from 139.178.68.195 port 50842 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:57.194428 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:57.209606 containerd[1500]: time="2025-08-13T07:32:57.208725229Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:32:57.214017 systemd-logind[1485]: New session 28 of user core. Aug 13 07:32:57.219829 systemd[1]: Started session-28.scope - Session 28 of User core. 
Aug 13 07:32:57.248444 containerd[1500]: time="2025-08-13T07:32:57.247863515Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b\"" Aug 13 07:32:57.249654 containerd[1500]: time="2025-08-13T07:32:57.249608024Z" level=info msg="StartContainer for \"a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b\"" Aug 13 07:32:57.300833 systemd[1]: Started cri-containerd-a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b.scope - libcontainer container a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b. Aug 13 07:32:57.344581 containerd[1500]: time="2025-08-13T07:32:57.343694330Z" level=info msg="StartContainer for \"a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b\" returns successfully" Aug 13 07:32:57.356265 systemd[1]: cri-containerd-a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b.scope: Deactivated successfully. Aug 13 07:32:57.387180 containerd[1500]: time="2025-08-13T07:32:57.387061901Z" level=info msg="shim disconnected" id=a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b namespace=k8s.io Aug 13 07:32:57.388105 containerd[1500]: time="2025-08-13T07:32:57.387674112Z" level=warning msg="cleaning up after shim disconnected" id=a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b namespace=k8s.io Aug 13 07:32:57.388105 containerd[1500]: time="2025-08-13T07:32:57.387708845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:32:57.810068 sshd[4476]: pam_unix(sshd:session): session closed for user core Aug 13 07:32:57.814399 systemd[1]: sshd@28-10.243.76.66:22-139.178.68.195:50842.service: Deactivated successfully. Aug 13 07:32:57.817451 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 07:32:57.819361 systemd-logind[1485]: Session 28 logged out. Waiting for processes to exit. Aug 13 07:32:57.821683 systemd-logind[1485]: Removed session 28. Aug 13 07:32:57.973451 systemd[1]: Started sshd@29-10.243.76.66:22-139.178.68.195:50844.service - OpenSSH per-connection server daemon (139.178.68.195:50844). Aug 13 07:32:58.203174 containerd[1500]: time="2025-08-13T07:32:58.203027870Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:32:58.208141 systemd[1]: run-containerd-runc-k8s.io-a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b-runc.XD1OCM.mount: Deactivated successfully. Aug 13 07:32:58.208939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a994811e9d9fc1d58abefcb31f2d0131cef3c9892e3b08f5ab43d08c498d831b-rootfs.mount: Deactivated successfully. Aug 13 07:32:58.242186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458900892.mount: Deactivated successfully. 
Aug 13 07:32:58.243962 containerd[1500]: time="2025-08-13T07:32:58.243876876Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d\"" Aug 13 07:32:58.245194 containerd[1500]: time="2025-08-13T07:32:58.245058127Z" level=info msg="StartContainer for \"3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d\"" Aug 13 07:32:58.297875 systemd[1]: Started cri-containerd-3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d.scope - libcontainer container 3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d. Aug 13 07:32:58.342733 containerd[1500]: time="2025-08-13T07:32:58.342527629Z" level=info msg="StartContainer for \"3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d\" returns successfully" Aug 13 07:32:58.353232 systemd[1]: cri-containerd-3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d.scope: Deactivated successfully. Aug 13 07:32:58.387713 containerd[1500]: time="2025-08-13T07:32:58.387604190Z" level=info msg="shim disconnected" id=3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d namespace=k8s.io Aug 13 07:32:58.388633 containerd[1500]: time="2025-08-13T07:32:58.388344283Z" level=warning msg="cleaning up after shim disconnected" id=3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d namespace=k8s.io Aug 13 07:32:58.388633 containerd[1500]: time="2025-08-13T07:32:58.388416759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:32:58.872204 sshd[4645]: Accepted publickey for core from 139.178.68.195 port 50844 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:32:58.874530 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:32:58.881828 systemd-logind[1485]: New session 29 of user core. Aug 13 07:32:58.890861 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 07:32:58.902040 kubelet[2695]: I0813 07:32:58.901615 2695 setters.go:602] "Node became not ready" node="srv-qvhwp.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T07:32:58Z","lastTransitionTime":"2025-08-13T07:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 07:32:59.204972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d3ee2266e020f0fa273f16d36cf79aa2b3e9287c4dd36e6467d3ab82c6cb15d-rootfs.mount: Deactivated successfully. Aug 13 07:32:59.213486 containerd[1500]: time="2025-08-13T07:32:59.213262328Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:32:59.264140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348568976.mount: Deactivated successfully. 
Aug 13 07:32:59.265995 containerd[1500]: time="2025-08-13T07:32:59.265942494Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89\"" Aug 13 07:32:59.268092 containerd[1500]: time="2025-08-13T07:32:59.268047268Z" level=info msg="StartContainer for \"164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89\"" Aug 13 07:32:59.330823 systemd[1]: Started cri-containerd-164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89.scope - libcontainer container 164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89. Aug 13 07:32:59.388588 containerd[1500]: time="2025-08-13T07:32:59.388514194Z" level=info msg="StartContainer for \"164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89\" returns successfully" Aug 13 07:32:59.398675 systemd[1]: cri-containerd-164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89.scope: Deactivated successfully. Aug 13 07:32:59.462256 containerd[1500]: time="2025-08-13T07:32:59.462092177Z" level=info msg="shim disconnected" id=164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89 namespace=k8s.io Aug 13 07:32:59.462256 containerd[1500]: time="2025-08-13T07:32:59.462169930Z" level=warning msg="cleaning up after shim disconnected" id=164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89 namespace=k8s.io Aug 13 07:32:59.462256 containerd[1500]: time="2025-08-13T07:32:59.462185524Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:33:00.205656 systemd[1]: run-containerd-runc-k8s.io-164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89-runc.qHbtU2.mount: Deactivated successfully. Aug 13 07:33:00.205926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-164a4cf3c58f537132b074bea15785e130afc818adc8810bb5fac55b798c0b89-rootfs.mount: Deactivated successfully. Aug 13 07:33:00.220124 containerd[1500]: time="2025-08-13T07:33:00.220067472Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:33:00.247579 containerd[1500]: time="2025-08-13T07:33:00.247505530Z" level=info msg="CreateContainer within sandbox \"7cdbfadba431bf1eaed28eddc5e9dd8b91caf94f2f0c2536225d585cfd382c47\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8\"" Aug 13 07:33:00.248427 containerd[1500]: time="2025-08-13T07:33:00.248378139Z" level=info msg="StartContainer for \"e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8\"" Aug 13 07:33:00.301043 systemd[1]: Started cri-containerd-e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8.scope - libcontainer container e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8. Aug 13 07:33:00.349931 containerd[1500]: time="2025-08-13T07:33:00.349863772Z" level=info msg="StartContainer for \"e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8\" returns successfully" Aug 13 07:33:01.105143 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 07:33:01.206614 systemd[1]: run-containerd-runc-k8s.io-e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8-runc.KJn1MH.mount: Deactivated successfully. 
Aug 13 07:33:01.249486 kubelet[2695]: I0813 07:33:01.249023 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7j248" podStartSLOduration=6.248980556 podStartE2EDuration="6.248980556s" podCreationTimestamp="2025-08-13 07:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:33:01.246548686 +0000 UTC m=+156.000552552" watchObservedRunningTime="2025-08-13 07:33:01.248980556 +0000 UTC m=+156.002984402" Aug 13 07:33:01.870138 systemd[1]: run-containerd-runc-k8s.io-e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8-runc.hsXF3f.mount: Deactivated successfully. Aug 13 07:33:04.081099 systemd[1]: run-containerd-runc-k8s.io-e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8-runc.zsyJd4.mount: Deactivated successfully. Aug 13 07:33:04.972086 systemd-networkd[1434]: lxc_health: Link UP Aug 13 07:33:04.981340 systemd-networkd[1434]: lxc_health: Gained carrier Aug 13 07:33:06.032769 systemd-networkd[1434]: lxc_health: Gained IPv6LL Aug 13 07:33:08.621297 systemd[1]: run-containerd-runc-k8s.io-e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8-runc.8WrPxc.mount: Deactivated successfully. Aug 13 07:33:10.879875 systemd[1]: run-containerd-runc-k8s.io-e611032df3b00cc67706044e2a962b928ced03c794c04cccc678dde8ae0a3dc8-runc.WK0fJG.mount: Deactivated successfully. Aug 13 07:33:11.133041 sshd[4645]: pam_unix(sshd:session): session closed for user core Aug 13 07:33:11.146101 systemd[1]: sshd@29-10.243.76.66:22-139.178.68.195:50844.service: Deactivated successfully. Aug 13 07:33:11.151528 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 07:33:11.155316 systemd-logind[1485]: Session 29 logged out. Waiting for processes to exit. Aug 13 07:33:11.159098 systemd-logind[1485]: Removed session 29.