Jul 7 01:22:25.041254 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 7 01:22:25.041593 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 01:22:25.041613 kernel: BIOS-provided physical RAM map: Jul 7 01:22:25.041632 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 7 01:22:25.041642 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 7 01:22:25.041652 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 7 01:22:25.041664 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jul 7 01:22:25.041675 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jul 7 01:22:25.041686 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 7 01:22:25.041697 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 7 01:22:25.041708 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 01:22:25.041718 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 7 01:22:25.041735 kernel: NX (Execute Disable) protection: active Jul 7 01:22:25.041746 kernel: APIC: Static calls initialized Jul 7 01:22:25.041759 kernel: SMBIOS 2.8 present. Jul 7 01:22:25.041771 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jul 7 01:22:25.041783 kernel: Hypervisor detected: KVM Jul 7 01:22:25.041799 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 01:22:25.041811 kernel: kvm-clock: using sched offset of 4418130397 cycles Jul 7 01:22:25.041824 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 01:22:25.041836 kernel: tsc: Detected 2499.998 MHz processor Jul 7 01:22:25.041848 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 01:22:25.041860 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 01:22:25.041872 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jul 7 01:22:25.041884 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 7 01:22:25.041895 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 01:22:25.041912 kernel: Using GB pages for direct mapping Jul 7 01:22:25.041924 kernel: ACPI: Early table checksum verification disabled Jul 7 01:22:25.041936 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jul 7 01:22:25.041948 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:22:25.041960 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:22:25.041972 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:22:25.041983 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jul 7 01:22:25.041995 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:22:25.042006 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 
01:22:25.042023 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:22:25.042035 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:22:25.042047 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jul 7 01:22:25.042059 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jul 7 01:22:25.042071 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jul 7 01:22:25.042090 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jul 7 01:22:25.042102 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jul 7 01:22:25.042119 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jul 7 01:22:25.042132 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jul 7 01:22:25.042144 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 7 01:22:25.042156 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 7 01:22:25.042169 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 7 01:22:25.042181 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jul 7 01:22:25.042193 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 7 01:22:25.042205 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jul 7 01:22:25.042222 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 7 01:22:25.042234 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jul 7 01:22:25.042247 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 7 01:22:25.042259 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jul 7 01:22:25.042271 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 7 01:22:25.042296 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jul 7 01:22:25.042310 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 7 01:22:25.042322 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jul 7 01:22:25.042334 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 7 01:22:25.042352 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jul 7 01:22:25.042364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 7 01:22:25.042376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 7 01:22:25.042388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jul 7 01:22:25.042401 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jul 7 01:22:25.042413 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jul 7 01:22:25.042426 kernel: Zone ranges: Jul 7 01:22:25.042438 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 01:22:25.042450 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jul 7 01:22:25.042467 kernel: Normal empty Jul 7 01:22:25.042480 kernel: Movable zone start for each node Jul 7 01:22:25.042492 kernel: Early memory node ranges Jul 7 01:22:25.042504 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 7 01:22:25.042516 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jul 7 01:22:25.042528 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jul 7 01:22:25.042540 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 01:22:25.042552 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 7 01:22:25.042648 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jul 7 01:22:25.042667 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 01:22:25.042687 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 01:22:25.042699 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 01:22:25.042711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 dfl dfl) Jul 7 01:22:25.042723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 01:22:25.042736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 01:22:25.042748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 01:22:25.042760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 01:22:25.042772 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 01:22:25.042785 kernel: TSC deadline timer available Jul 7 01:22:25.042802 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jul 7 01:22:25.042815 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 01:22:25.042827 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 7 01:22:25.042840 kernel: Booting paravirtualized kernel on KVM Jul 7 01:22:25.042852 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 01:22:25.042864 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 7 01:22:25.042877 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 7 01:22:25.042889 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 7 01:22:25.042901 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 7 01:22:25.042919 kernel: kvm-guest: PV spinlocks enabled Jul 7 01:22:25.042931 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 01:22:25.042945 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 01:22:25.042958 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 01:22:25.042970 kernel: random: crng init done Jul 7 01:22:25.042982 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 01:22:25.042995 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 01:22:25.043007 kernel: Fallback order for Node 0: 0 Jul 7 01:22:25.043024 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jul 7 01:22:25.043037 kernel: Policy zone: DMA32 Jul 7 01:22:25.043049 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 01:22:25.043062 kernel: software IO TLB: area num 16. Jul 7 01:22:25.043074 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 194824K reserved, 0K cma-reserved) Jul 7 01:22:25.043087 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 7 01:22:25.043100 kernel: Kernel/User page tables isolation: enabled Jul 7 01:22:25.043112 kernel: ftrace: allocating 37966 entries in 149 pages Jul 7 01:22:25.043124 kernel: ftrace: allocated 149 pages with 4 groups Jul 7 01:22:25.043142 kernel: Dynamic Preempt: voluntary Jul 7 01:22:25.043154 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 01:22:25.043171 kernel: rcu: RCU event tracing is enabled. Jul 7 01:22:25.043185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 7 01:22:25.043198 kernel: Trampoline variant of Tasks RCU enabled. 
Jul 7 01:22:25.043223 kernel: Rude variant of Tasks RCU enabled. Jul 7 01:22:25.043241 kernel: Tracing variant of Tasks RCU enabled. Jul 7 01:22:25.043254 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 01:22:25.043267 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 7 01:22:25.043280 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jul 7 01:22:25.043308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 01:22:25.043327 kernel: Console: colour VGA+ 80x25 Jul 7 01:22:25.043340 kernel: printk: console [tty0] enabled Jul 7 01:22:25.043353 kernel: printk: console [ttyS0] enabled Jul 7 01:22:25.043366 kernel: ACPI: Core revision 20230628 Jul 7 01:22:25.043379 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 01:22:25.043392 kernel: x2apic enabled Jul 7 01:22:25.043410 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 01:22:25.043423 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 7 01:22:25.043436 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jul 7 01:22:25.043449 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 01:22:25.043463 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 7 01:22:25.043475 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 7 01:22:25.043488 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 01:22:25.043501 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 01:22:25.043513 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 01:22:25.043532 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 7 01:22:25.043545 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 01:22:25.043558 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 01:22:25.043583 kernel: MDS: Mitigation: Clear CPU buffers Jul 7 01:22:25.043597 kernel: MMIO Stale Data: Unknown: No mitigations Jul 7 01:22:25.043610 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 7 01:22:25.043622 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 01:22:25.043635 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 01:22:25.043648 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 01:22:25.043661 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 01:22:25.043674 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 01:22:25.043693 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 7 01:22:25.043706 kernel: Freeing SMP alternatives memory: 32K Jul 7 01:22:25.043719 kernel: pid_max: default: 32768 minimum: 301 Jul 7 01:22:25.043732 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 01:22:25.043745 kernel: landlock: Up and running. Jul 7 01:22:25.043758 kernel: SELinux: Initializing. 
Jul 7 01:22:25.043771 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 01:22:25.043784 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 01:22:25.043796 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jul 7 01:22:25.043809 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 01:22:25.043822 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 01:22:25.043841 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 01:22:25.043854 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jul 7 01:22:25.043868 kernel: signal: max sigframe size: 1776 Jul 7 01:22:25.043881 kernel: rcu: Hierarchical SRCU implementation. Jul 7 01:22:25.043894 kernel: rcu: Max phase no-delay instances is 400. Jul 7 01:22:25.043907 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 01:22:25.043920 kernel: smp: Bringing up secondary CPUs ... Jul 7 01:22:25.043933 kernel: smpboot: x86: Booting SMP configuration: Jul 7 01:22:25.043945 kernel: .... node #0, CPUs: #1 Jul 7 01:22:25.043964 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 7 01:22:25.043977 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 01:22:25.043990 kernel: smpboot: Max logical packages: 16 Jul 7 01:22:25.044003 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jul 7 01:22:25.044016 kernel: devtmpfs: initialized Jul 7 01:22:25.044029 kernel: x86/mm: Memory block size: 128MB Jul 7 01:22:25.044042 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 01:22:25.044055 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 7 01:22:25.044068 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 01:22:25.044087 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 01:22:25.044100 kernel: audit: initializing netlink subsys (disabled) Jul 7 01:22:25.044113 kernel: audit: type=2000 audit(1751851343.973:1): state=initialized audit_enabled=0 res=1 Jul 7 01:22:25.044125 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 01:22:25.044138 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 01:22:25.044151 kernel: cpuidle: using governor menu Jul 7 01:22:25.044164 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 01:22:25.044177 kernel: dca service started, version 1.12.1 Jul 7 01:22:25.044190 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 7 01:22:25.044209 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 7 01:22:25.044222 kernel: PCI: Using configuration type 1 for base access Jul 7 01:22:25.044235 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 01:22:25.044248 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 01:22:25.044261 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 01:22:25.044273 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 01:22:25.044297 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 01:22:25.044311 kernel: ACPI: Added _OSI(Module Device) Jul 7 01:22:25.044324 kernel: ACPI: Added _OSI(Processor Device) Jul 7 01:22:25.044343 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 01:22:25.044356 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 01:22:25.044369 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 7 01:22:25.044382 kernel: ACPI: Interpreter enabled Jul 7 01:22:25.044394 kernel: ACPI: PM: (supports S0 S5) Jul 7 01:22:25.044407 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 01:22:25.044420 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 01:22:25.044433 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 01:22:25.044446 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 7 01:22:25.044465 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 01:22:25.044751 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:22:25.044937 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 01:22:25.045111 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 7 01:22:25.045131 kernel: PCI host bridge to bus 0000:00 Jul 7 01:22:25.045333 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 01:22:25.045502 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 01:22:25.045685 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 01:22:25.045843 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jul 7 01:22:25.046014 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 7 01:22:25.046174 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jul 7 01:22:25.046508 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 01:22:25.046743 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 7 01:22:25.046935 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jul 7 01:22:25.047106 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jul 7 01:22:25.047273 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jul 7 01:22:25.047461 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jul 7 01:22:25.047657 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 01:22:25.047850 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.048030 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jul 7 01:22:25.048224 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.048413 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jul 7 01:22:25.049140 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.049340 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jul 7 01:22:25.049522 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.049738 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jul 7 01:22:25.049929 kernel: 
pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.050103 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jul 7 01:22:25.050292 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.050466 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jul 7 01:22:25.050682 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.050852 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jul 7 01:22:25.051038 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jul 7 01:22:25.051206 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jul 7 01:22:25.051411 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 7 01:22:25.051605 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 7 01:22:25.051777 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jul 7 01:22:25.051951 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jul 7 01:22:25.052121 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jul 7 01:22:25.052327 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 7 01:22:25.052499 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 7 01:22:25.052686 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jul 7 01:22:25.052858 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jul 7 01:22:25.053039 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 7 01:22:25.053210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 7 01:22:25.053406 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 7 01:22:25.053609 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jul 7 01:22:25.053781 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jul 7 01:22:25.053963 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 7 01:22:25.054134 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 7 01:22:25.054334 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jul 7 01:22:25.054513 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jul 7 01:22:25.054714 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 01:22:25.054881 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jul 7 01:22:25.055052 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 01:22:25.055245 kernel: pci_bus 0000:02: extended config space not accessible Jul 7 01:22:25.055456 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jul 7 01:22:25.055669 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jul 7 01:22:25.055857 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 01:22:25.056049 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 01:22:25.056242 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jul 7 01:22:25.056575 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jul 7 01:22:25.056757 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 01:22:25.056934 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 01:22:25.059708 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 01:22:25.059922 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jul 7 01:22:25.060107 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jul 7 01:22:25.060295 kernel: pci 0000:00:02.2: PCI bridge to [bus 
04] Jul 7 01:22:25.060473 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 01:22:25.062161 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 01:22:25.062373 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 01:22:25.062551 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 01:22:25.062742 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 01:22:25.062928 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 01:22:25.063099 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 01:22:25.063268 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 01:22:25.063470 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 01:22:25.067555 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 01:22:25.067782 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 01:22:25.067984 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 01:22:25.068162 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 01:22:25.068365 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 01:22:25.068545 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 01:22:25.069196 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 01:22:25.069388 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 01:22:25.069410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 01:22:25.069424 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 01:22:25.069437 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 01:22:25.069450 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 01:22:25.069472 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 7 01:22:25.069486 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 7 01:22:25.069499 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 7 01:22:25.069512 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 7 01:22:25.069525 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 7 01:22:25.069539 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 7 01:22:25.069552 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 7 01:22:25.071169 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 7 01:22:25.071193 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 7 01:22:25.071216 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 7 01:22:25.071229 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 7 01:22:25.071243 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 7 01:22:25.071256 kernel: iommu: Default domain type: Translated Jul 7 01:22:25.071269 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 01:22:25.071298 kernel: PCI: Using ACPI for IRQ routing Jul 7 01:22:25.071313 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 01:22:25.071326 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 7 01:22:25.071339 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jul 7 01:22:25.071535 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 7 01:22:25.071746 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 7 01:22:25.071920 kernel: pci 0000:00:01.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Jul 7 01:22:25.071940 kernel: vgaarb: loaded Jul 7 01:22:25.071954 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 01:22:25.071967 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 01:22:25.071981 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 01:22:25.071994 kernel: pnp: PnP ACPI init Jul 7 01:22:25.072180 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 7 01:22:25.072203 kernel: pnp: PnP ACPI: found 5 devices Jul 7 01:22:25.072217 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 01:22:25.072230 kernel: NET: Registered PF_INET protocol family Jul 7 01:22:25.072243 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 01:22:25.072256 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 01:22:25.072270 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 01:22:25.072295 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 01:22:25.072309 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 01:22:25.072330 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 01:22:25.072343 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 01:22:25.072357 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 01:22:25.072370 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 01:22:25.072383 kernel: NET: Registered PF_XDP protocol family Jul 7 01:22:25.072554 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jul 7 01:22:25.072758 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:22:25.072941 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:22:25.075238 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jul 7 01:22:25.075754 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 7 01:22:25.075938 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 7 01:22:25.076114 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 7 01:22:25.076302 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 7 01:22:25.076500 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jul 7 01:22:25.077928 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jul 7 01:22:25.078108 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jul 7 01:22:25.078297 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jul 7 01:22:25.078474 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jul 7 01:22:25.079702 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jul 7 01:22:25.079881 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jul 7 01:22:25.080054 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jul 7 01:22:25.080243 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 01:22:25.082787 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 01:22:25.082967 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 01:22:25.083140 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jul 7 01:22:25.083844 kernel: pci 0000:00:02.0: bridge window 
[mem 0xfd800000-0xfdbfffff] Jul 7 01:22:25.084027 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 01:22:25.084200 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 01:22:25.084392 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jul 7 01:22:25.084609 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 01:22:25.084793 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 01:22:25.084972 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 7 01:22:25.085153 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jul 7 01:22:25.085341 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 01:22:25.085518 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 01:22:25.087760 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 01:22:25.087958 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jul 7 01:22:25.088153 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 01:22:25.088347 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 01:22:25.088538 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 01:22:25.088739 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jul 7 01:22:25.088911 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 01:22:25.089080 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 01:22:25.089252 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 01:22:25.090764 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jul 7 01:22:25.090953 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 01:22:25.091127 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 01:22:25.091315 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 01:22:25.091489 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jul 7 01:22:25.092733 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 01:22:25.092913 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 01:22:25.093088 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 01:22:25.093258 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jul 7 01:22:25.093442 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 01:22:25.095655 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 01:22:25.095829 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 01:22:25.095987 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 01:22:25.096142 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 01:22:25.096319 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jul 7 01:22:25.096473 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 7 01:22:25.096672 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jul 7 01:22:25.096850 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jul 7 01:22:25.097012 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jul 7 01:22:25.097171 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 01:22:25.097357 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jul 7 01:22:25.097539 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jul 7 01:22:25.103973 kernel: pci_bus 0000:03: resource 1 
[mem 0xfe800000-0xfe9fffff] Jul 7 01:22:25.104155 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 01:22:25.104349 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jul 7 01:22:25.104515 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 7 01:22:25.104721 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 01:22:25.104902 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 7 01:22:25.105075 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 7 01:22:25.105236 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 01:22:25.105431 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jul 7 01:22:25.105625 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 7 01:22:25.105789 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 01:22:25.105961 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jul 7 01:22:25.106123 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 7 01:22:25.106313 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 01:22:25.106500 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jul 7 01:22:25.106693 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 7 01:22:25.106859 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 01:22:25.107034 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 7 01:22:25.107199 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 7 01:22:25.108165 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 01:22:25.108191 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 7 01:22:25.108207 kernel: PCI: CLS 0 bytes, default 64 Jul 7 01:22:25.108221 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 01:22:25.108235 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 7 01:22:25.108249 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 01:22:25.108263 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 7 01:22:25.108277 kernel: Initialise system trusted keyrings Jul 7 01:22:25.108303 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 01:22:25.108327 kernel: Key type asymmetric registered Jul 7 01:22:25.108341 kernel: Asymmetric key parser 'x509' registered Jul 7 01:22:25.108354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 7 01:22:25.108369 kernel: io scheduler mq-deadline registered Jul 7 01:22:25.108382 kernel: io scheduler kyber registered Jul 7 01:22:25.108396 kernel: io scheduler bfq registered Jul 7 01:22:25.108632 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 7 01:22:25.108810 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 7 01:22:25.108982 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.109166 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 7 01:22:25.109352 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 7 01:22:25.109522 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.109731 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 7 
01:22:25.109908 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 7 01:22:25.110079 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.110264 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 7 01:22:25.110451 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 7 01:22:25.111668 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.111848 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 7 01:22:25.112021 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 7 01:22:25.112191 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.112387 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 7 01:22:25.115754 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 7 01:22:25.115955 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.116133 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 7 01:22:25.116317 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 7 01:22:25.116490 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.116717 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 7 01:22:25.116888 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 7 01:22:25.117056 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 01:22:25.117078 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 01:22:25.117094 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 7 01:22:25.117108 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 7 01:22:25.117130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 01:22:25.117145 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 01:22:25.117159 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 01:22:25.117173 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 01:22:25.117186 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 01:22:25.117378 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 7 01:22:25.117402 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 01:22:25.117556 kernel: rtc_cmos 00:03: registered as rtc0 Jul 7 01:22:25.117770 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T01:22:24 UTC (1751851344) Jul 7 01:22:25.117931 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 7 01:22:25.117952 kernel: intel_pstate: CPU model not supported Jul 7 01:22:25.117966 kernel: NET: Registered PF_INET6 protocol family Jul 7 01:22:25.117979 kernel: Segment Routing with IPv6 Jul 7 01:22:25.118001 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 01:22:25.118015 kernel: NET: Registered PF_PACKET protocol family Jul 7 01:22:25.118029 kernel: Key type dns_resolver registered Jul 7 01:22:25.118043 kernel: IPI shorthand broadcast: enabled Jul 7 01:22:25.118062 kernel: sched_clock: Marking stable (1173004247, 231065509)->(1633218433, -229148677) Jul 7 01:22:25.118076 kernel: 
registered taskstats version 1 Jul 7 01:22:25.118090 kernel: Loading compiled-in X.509 certificates Jul 7 01:22:25.118104 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 7 01:22:25.118117 kernel: Key type .fscrypt registered Jul 7 01:22:25.118131 kernel: Key type fscrypt-provisioning registered Jul 7 01:22:25.118145 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 01:22:25.118158 kernel: ima: Allocated hash algorithm: sha1 Jul 7 01:22:25.118172 kernel: ima: No architecture policies found Jul 7 01:22:25.118191 kernel: clk: Disabling unused clocks Jul 7 01:22:25.118205 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 7 01:22:25.118219 kernel: Write protecting the kernel read-only data: 36864k Jul 7 01:22:25.118233 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 7 01:22:25.118247 kernel: Run /init as init process Jul 7 01:22:25.118260 kernel: with arguments: Jul 7 01:22:25.118274 kernel: /init Jul 7 01:22:25.118503 kernel: with environment: Jul 7 01:22:25.118521 kernel: HOME=/ Jul 7 01:22:25.118543 kernel: TERM=linux Jul 7 01:22:25.118556 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 01:22:25.118603 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 01:22:25.118623 systemd[1]: Detected virtualization kvm. Jul 7 01:22:25.118638 systemd[1]: Detected architecture x86-64. Jul 7 01:22:25.118652 systemd[1]: Running in initrd. Jul 7 01:22:25.118666 systemd[1]: No hostname configured, using default hostname. Jul 7 01:22:25.118688 systemd[1]: Hostname set to . Jul 7 01:22:25.118703 systemd[1]: Initializing machine ID from VM UUID. Jul 7 01:22:25.118718 systemd[1]: Queued start job for default target initrd.target. Jul 7 01:22:25.118733 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:22:25.118748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:22:25.118763 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 01:22:25.118778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 01:22:25.118793 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 01:22:25.118814 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 01:22:25.118831 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 01:22:25.118847 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 01:22:25.118861 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:22:25.118876 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:22:25.118891 systemd[1]: Reached target paths.target - Path Units. Jul 7 01:22:25.118906 systemd[1]: Reached target slices.target - Slice Units. Jul 7 01:22:25.118926 systemd[1]: Reached target swap.target - Swaps. 
Jul 7 01:22:25.118941 systemd[1]: Reached target timers.target - Timer Units. Jul 7 01:22:25.118956 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 01:22:25.118971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 01:22:25.118985 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 01:22:25.119000 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 01:22:25.119016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:22:25.119030 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 01:22:25.119045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:22:25.119065 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 01:22:25.119080 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 01:22:25.119095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 01:22:25.119110 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 01:22:25.119125 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 01:22:25.119139 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 01:22:25.119154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 01:22:25.119169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:22:25.119236 systemd-journald[201]: Collecting audit messages is disabled. Jul 7 01:22:25.119270 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 01:22:25.125822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:22:25.125851 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 01:22:25.125884 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 01:22:25.125904 systemd-journald[201]: Journal started Jul 7 01:22:25.125938 systemd-journald[201]: Runtime Journal (/run/log/journal/103d322dbf8c41ec835f368e03e6b705) is 4.7M, max 38.0M, 33.2M free. Jul 7 01:22:25.069634 systemd-modules-load[202]: Inserted module 'overlay' Jul 7 01:22:25.192666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 01:22:25.192703 kernel: Bridge firewalling registered Jul 7 01:22:25.160223 systemd-modules-load[202]: Inserted module 'br_netfilter' Jul 7 01:22:25.201587 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 01:22:25.202380 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 01:22:25.204468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:22:25.209257 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:22:25.216841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 01:22:25.227091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:22:25.231747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 01:22:25.236532 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 01:22:25.242242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 01:22:25.255049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:22:25.267171 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:22:25.269046 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:22:25.278807 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 01:22:25.284767 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 01:22:25.304595 dracut-cmdline[234]: dracut-dracut-053 Jul 7 01:22:25.307670 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 01:22:25.330637 systemd-resolved[235]: Positive Trust Anchors: Jul 7 01:22:25.330654 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 01:22:25.330698 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 01:22:25.339869 systemd-resolved[235]: Defaulting to hostname 'linux'. Jul 7 01:22:25.341488 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 01:22:25.342617 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:22:25.405722 kernel: SCSI subsystem initialized Jul 7 01:22:25.417830 kernel: Loading iSCSI transport class v2.0-870. Jul 7 01:22:25.431608 kernel: iscsi: registered transport (tcp) Jul 7 01:22:25.457705 kernel: iscsi: registered transport (qla4xxx) Jul 7 01:22:25.457796 kernel: QLogic iSCSI HBA Driver Jul 7 01:22:25.517901 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 01:22:25.527853 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 01:22:25.562589 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 01:22:25.562681 kernel: device-mapper: uevent: version 1.0.3 Jul 7 01:22:25.565605 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 01:22:25.613624 kernel: raid6: sse2x4 gen() 14027 MB/s Jul 7 01:22:25.631599 kernel: raid6: sse2x2 gen() 9637 MB/s Jul 7 01:22:25.650225 kernel: raid6: sse2x1 gen() 10227 MB/s Jul 7 01:22:25.650312 kernel: raid6: using algorithm sse2x4 gen() 14027 MB/s Jul 7 01:22:25.669324 kernel: raid6: .... 
xor() 7726 MB/s, rmw enabled Jul 7 01:22:25.669414 kernel: raid6: using ssse3x2 recovery algorithm Jul 7 01:22:25.695619 kernel: xor: automatically using best checksumming function avx Jul 7 01:22:25.886627 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 01:22:25.902189 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 01:22:25.911950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:22:25.929929 systemd-udevd[418]: Using default interface naming scheme 'v255'. Jul 7 01:22:25.937260 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:22:25.944825 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 01:22:25.967148 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jul 7 01:22:26.008034 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 01:22:26.014816 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 01:22:26.131477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:22:26.144632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 01:22:26.171706 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 01:22:26.175017 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 01:22:26.175820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:22:26.178960 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 01:22:26.187770 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 01:22:26.213195 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 01:22:26.268644 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 01:22:26.273629 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jul 7 01:22:26.292961 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 7 01:22:26.303615 kernel: AVX version of gcm_enc/dec engaged. Jul 7 01:22:26.313720 kernel: AES CTR mode by8 optimization enabled Jul 7 01:22:26.316594 kernel: libata version 3.00 loaded. Jul 7 01:22:26.316800 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 01:22:26.317190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:22:26.325813 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 01:22:26.338494 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 01:22:26.338528 kernel: GPT:17805311 != 125829119 Jul 7 01:22:26.338547 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 01:22:26.338582 kernel: GPT:17805311 != 125829119 Jul 7 01:22:26.338603 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 01:22:26.338621 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:22:26.326744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:22:26.326960 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:22:26.340472 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:22:26.352914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 01:22:26.357586 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 01:22:26.357870 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 01:22:26.363606 kernel: ACPI: bus type USB registered Jul 7 01:22:26.370842 kernel: usbcore: registered new interface driver usbfs Jul 7 01:22:26.370906 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 7 01:22:26.371172 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 01:22:26.377066 kernel: usbcore: registered new interface driver hub Jul 7 01:22:26.377121 kernel: usbcore: registered new device driver usb Jul 7 01:22:26.378114 kernel: scsi host0: ahci Jul 7 01:22:26.380591 kernel: scsi host1: ahci Jul 7 01:22:26.385587 kernel: scsi host2: ahci Jul 7 01:22:26.389603 kernel: scsi host3: ahci Jul 7 01:22:26.390592 kernel: scsi host4: ahci Jul 7 01:22:26.392605 kernel: scsi host5: ahci Jul 7 01:22:26.392831 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jul 7 01:22:26.392864 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jul 7 01:22:26.392883 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jul 7 01:22:26.392901 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jul 7 01:22:26.392917 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jul 7 01:22:26.392934 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jul 7 01:22:26.423600 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (463) Jul 7 01:22:26.438137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 01:22:26.501553 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473) Jul 7 01:22:26.500590 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 01:22:26.502611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:22:26.516472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 01:22:26.523850 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 01:22:26.535757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 01:22:26.542779 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 01:22:26.546751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 01:22:26.551075 disk-uuid[556]: Primary Header is updated. Jul 7 01:22:26.551075 disk-uuid[556]: Secondary Entries is updated. Jul 7 01:22:26.551075 disk-uuid[556]: Secondary Header is updated. Jul 7 01:22:26.557631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:22:26.567598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:22:26.584210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 7 01:22:26.699604 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 01:22:26.707587 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 01:22:26.707632 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 01:22:26.708856 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 01:22:26.711488 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 01:22:26.713330 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 01:22:26.728802 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 01:22:26.736297 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 7 01:22:26.736627 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 7 01:22:26.741765 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 01:22:26.742060 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 7 01:22:26.744736 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 7 01:22:26.745025 kernel: hub 1-0:1.0: USB hub found Jul 7 01:22:26.747244 kernel: hub 1-0:1.0: 4 ports detected Jul 7 01:22:26.747688 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 7 01:22:26.751001 kernel: hub 2-0:1.0: USB hub found Jul 7 01:22:26.751275 kernel: hub 2-0:1.0: 4 ports detected Jul 7 01:22:26.990713 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 7 01:22:27.131666 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 01:22:27.138236 kernel: usbcore: registered new interface driver usbhid Jul 7 01:22:27.138294 kernel: usbhid: USB HID core driver Jul 7 01:22:27.146043 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jul 7 01:22:27.146110 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 7 01:22:27.575885 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:22:27.575964 disk-uuid[557]: The operation has completed successfully. Jul 7 01:22:27.634376 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 01:22:27.634543 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 01:22:27.658829 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 01:22:27.665511 sh[580]: Success Jul 7 01:22:27.683630 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jul 7 01:22:27.732409 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 01:22:27.751709 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 01:22:27.755493 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 01:22:27.780840 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 7 01:22:27.780919 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:22:27.782977 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 01:22:27.786367 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 01:22:27.786402 kernel: BTRFS info (device dm-0): using free space tree Jul 7 01:22:27.797321 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 01:22:27.798820 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 7 01:22:27.803782 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 01:22:27.807753 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 01:22:27.824992 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:22:27.825047 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:22:27.825068 kernel: BTRFS info (device vda6): using free space tree Jul 7 01:22:27.829583 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 01:22:27.842383 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 01:22:27.845124 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:22:27.853049 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 01:22:27.859977 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 01:22:28.007546 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 01:22:28.010337 ignition[675]: Ignition 2.19.0 Jul 7 01:22:28.011330 ignition[675]: Stage: fetch-offline Jul 7 01:22:28.011452 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:22:28.011473 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:22:28.011672 ignition[675]: parsed url from cmdline: "" Jul 7 01:22:28.011679 ignition[675]: no config URL provided Jul 7 01:22:28.011690 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 01:22:28.015805 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 01:22:28.011705 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jul 7 01:22:28.011714 ignition[675]: failed to fetch config: resource requires networking Jul 7 01:22:28.020874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 01:22:28.012781 ignition[675]: Ignition finished successfully Jul 7 01:22:28.053120 systemd-networkd[769]: lo: Link UP Jul 7 01:22:28.053137 systemd-networkd[769]: lo: Gained carrier Jul 7 01:22:28.055434 systemd-networkd[769]: Enumeration completed Jul 7 01:22:28.055597 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 01:22:28.056469 systemd[1]: Reached target network.target - Network. Jul 7 01:22:28.056904 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:22:28.056910 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:22:28.059523 systemd-networkd[769]: eth0: Link UP Jul 7 01:22:28.059530 systemd-networkd[769]: eth0: Gained carrier Jul 7 01:22:28.059542 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:22:28.066802 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 01:22:28.082689 systemd-networkd[769]: eth0: DHCPv4 address 10.244.21.90/30, gateway 10.244.21.89 acquired from 10.244.21.89 Jul 7 01:22:28.090762 ignition[772]: Ignition 2.19.0 Jul 7 01:22:28.090780 ignition[772]: Stage: fetch Jul 7 01:22:28.091058 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:22:28.091079 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:22:28.091217 ignition[772]: parsed url from cmdline: "" Jul 7 01:22:28.091225 ignition[772]: no config URL provided Jul 7 01:22:28.091248 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 01:22:28.091265 ignition[772]: no config at "/usr/lib/ignition/user.ign" Jul 7 01:22:28.091450 ignition[772]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 7 01:22:28.091915 ignition[772]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 7 01:22:28.091959 ignition[772]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 7 01:22:28.109767 ignition[772]: GET result: OK Jul 7 01:22:28.111128 ignition[772]: parsing config with SHA512: 3b541433cefbf4ee02becdd955b1f636fdfecaf87df8d8136417ac22fe6a2ae89c4e077639790dccbe7ffa89d0dbda49c72781d0b97a71d83341e21dc789f847 Jul 7 01:22:28.116638 unknown[772]: fetched base config from "system" Jul 7 01:22:28.116657 unknown[772]: fetched base config from "system" Jul 7 01:22:28.117204 ignition[772]: fetch: fetch complete Jul 7 01:22:28.116666 unknown[772]: fetched user config from "openstack" Jul 7 01:22:28.117213 ignition[772]: fetch: fetch passed Jul 7 01:22:28.119253 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 01:22:28.117348 ignition[772]: Ignition finished successfully Jul 7 01:22:28.128816 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 01:22:28.151517 ignition[780]: Ignition 2.19.0 Jul 7 01:22:28.151540 ignition[780]: Stage: kargs Jul 7 01:22:28.151823 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:22:28.154637 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 01:22:28.151845 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:22:28.153264 ignition[780]: kargs: kargs passed Jul 7 01:22:28.153341 ignition[780]: Ignition finished successfully Jul 7 01:22:28.169335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 01:22:28.188734 ignition[786]: Ignition 2.19.0 Jul 7 01:22:28.188757 ignition[786]: Stage: disks Jul 7 01:22:28.189018 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:22:28.189039 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:22:28.192683 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 01:22:28.190394 ignition[786]: disks: disks passed Jul 7 01:22:28.194027 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 01:22:28.190480 ignition[786]: Ignition finished successfully Jul 7 01:22:28.194850 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 01:22:28.196431 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 01:22:28.197803 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 01:22:28.199338 systemd[1]: Reached target basic.target - Basic System. Jul 7 01:22:28.213877 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 7 01:22:28.232697 systemd-fsck[794]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 7 01:22:28.236195 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 01:22:28.242698 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 01:22:28.368619 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 7 01:22:28.369610 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 01:22:28.370967 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 01:22:28.377736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 01:22:28.385834 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 01:22:28.389815 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 01:22:28.391459 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 7 01:22:28.393351 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 01:22:28.393396 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 01:22:28.397490 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 01:22:28.404365 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 01:22:28.413597 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (802) Jul 7 01:22:28.420751 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:22:28.420814 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:22:28.420836 kernel: BTRFS info (device vda6): using free space tree Jul 7 01:22:28.433795 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 01:22:28.439796 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 01:22:28.492514 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 01:22:28.501621 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Jul 7 01:22:28.511436 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 01:22:28.518676 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 01:22:28.635021 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 01:22:28.639715 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 01:22:28.642765 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 01:22:28.657591 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:22:28.681348 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 01:22:28.692270 ignition[919]: INFO : Ignition 2.19.0 Jul 7 01:22:28.692270 ignition[919]: INFO : Stage: mount Jul 7 01:22:28.695205 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:22:28.695205 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:22:28.697931 ignition[919]: INFO : mount: mount passed Jul 7 01:22:28.697931 ignition[919]: INFO : Ignition finished successfully Jul 7 01:22:28.697989 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 01:22:28.779183 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 7 01:22:29.851150 systemd-networkd[769]: eth0: Gained IPv6LL Jul 7 01:22:31.187533 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:556:24:19ff:fef4:155a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:556:24:19ff:fef4:155a/64 assigned by NDisc. Jul 7 01:22:31.187552 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 01:22:35.588996 coreos-metadata[804]: Jul 07 01:22:35.588 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:22:35.613276 coreos-metadata[804]: Jul 07 01:22:35.613 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 01:22:35.645716 coreos-metadata[804]: Jul 07 01:22:35.645 INFO Fetch successful Jul 7 01:22:35.646784 coreos-metadata[804]: Jul 07 01:22:35.646 INFO wrote hostname srv-3dgpq.gb1.brightbox.com to /sysroot/etc/hostname Jul 7 01:22:35.648870 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 7 01:22:35.649052 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 7 01:22:35.668007 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 01:22:35.690902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 01:22:35.703717 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (935) Jul 7 01:22:35.707509 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:22:35.707553 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:22:35.709239 kernel: BTRFS info (device vda6): using free space tree Jul 7 01:22:35.715684 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 01:22:35.719715 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 01:22:35.758532 ignition[953]: INFO : Ignition 2.19.0 Jul 7 01:22:35.758532 ignition[953]: INFO : Stage: files Jul 7 01:22:35.760941 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:22:35.760941 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:22:35.762784 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jul 7 01:22:35.762784 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 01:22:35.762784 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 01:22:35.766411 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 01:22:35.767664 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 01:22:35.769399 unknown[953]: wrote ssh authorized keys file for user: core Jul 7 01:22:35.770708 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 01:22:35.772012 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 01:22:35.773274 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 01:22:35.773274 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 01:22:35.773274 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 01:22:36.036916 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 01:22:37.383215 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 01:22:37.383215 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 01:22:37.391516 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 01:22:38.315455 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 7 01:22:38.753703 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 01:22:38.753703 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 7 01:22:38.753703 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 01:22:38.753703 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 01:22:38.753703 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 01:22:38.753703 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 01:22:38.753703 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 01:22:38.753703 ignition[953]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 01:22:38.764873 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 01:22:38.764873 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 01:22:38.764873 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 01:22:38.764873 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:22:38.764873 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:22:38.764873 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:22:38.764873 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 01:22:39.429765 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 7 01:22:41.464655 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:22:41.464655 ignition[953]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 01:22:41.468104 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 01:22:41.468104 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 01:22:41.468104 ignition[953]: INFO : files: files passed Jul 7 01:22:41.498236 ignition[953]: INFO : Ignition finished successfully Jul 7 01:22:41.472123 systemd[1]: Finished 
ignition-files.service - Ignition (files). Jul 7 01:22:41.498889 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 01:22:41.500977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 01:22:41.530091 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 01:22:41.531265 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 01:22:41.539251 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:22:41.539251 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:22:41.542964 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:22:41.544840 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 01:22:41.546459 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 01:22:41.558940 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 01:22:41.602865 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 01:22:41.603154 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 01:22:41.605076 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 01:22:41.606303 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 01:22:41.607983 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 01:22:41.613908 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 01:22:41.636610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:22:41.652902 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 01:22:41.667164 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:22:41.668263 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:22:41.670075 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 01:22:41.671591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 01:22:41.671804 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:22:41.673590 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 01:22:41.682392 systemd[1]: Stopped target basic.target - Basic System. Jul 7 01:22:41.683345 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 01:22:41.684831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 01:22:41.686431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 01:22:41.688057 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 01:22:41.689616 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 01:22:41.691197 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 01:22:41.692856 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 01:22:41.694321 systemd[1]: Stopped target swap.target - Swaps. Jul 7 01:22:41.695607 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jul 7 01:22:41.695862 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 01:22:41.697496 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:22:41.698487 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:22:41.700030 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 01:22:41.700769 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:22:41.701726 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 01:22:41.701963 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 01:22:41.703831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 01:22:41.704098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 01:22:41.705989 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 01:22:41.706179 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 01:22:41.723688 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 01:22:41.726337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 01:22:41.728136 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 01:22:41.728372 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:22:41.733496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 01:22:41.733785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 01:22:41.744288 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 01:22:41.744501 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 01:22:41.764307 ignition[1006]: INFO : Ignition 2.19.0 Jul 7 01:22:41.764307 ignition[1006]: INFO : Stage: umount Jul 7 01:22:41.766062 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:22:41.766062 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:22:41.766062 ignition[1006]: INFO : umount: umount passed Jul 7 01:22:41.770198 ignition[1006]: INFO : Ignition finished successfully Jul 7 01:22:41.769959 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 01:22:41.770922 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 01:22:41.772657 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 01:22:41.774997 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 01:22:41.775150 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 01:22:41.776786 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 01:22:41.776868 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 01:22:41.778230 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 01:22:41.778299 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 01:22:41.779673 systemd[1]: Stopped target network.target - Network. Jul 7 01:22:41.780961 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 01:22:41.781048 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 01:22:41.782462 systemd[1]: Stopped target paths.target - Path Units. Jul 7 01:22:41.783796 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 7 01:22:41.785982 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:22:41.787059 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 01:22:41.788391 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 01:22:41.790211 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 01:22:41.790288 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 01:22:41.791778 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 01:22:41.791845 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 01:22:41.793237 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 01:22:41.793352 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 01:22:41.794602 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 01:22:41.794677 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 01:22:41.796251 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 01:22:41.798681 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 01:22:41.800524 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 01:22:41.800697 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 01:22:41.802120 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 01:22:41.802264 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 01:22:41.802442 systemd-networkd[769]: eth0: DHCPv6 lease lost Jul 7 01:22:41.805626 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 01:22:41.806683 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 01:22:41.809591 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 01:22:41.809766 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 01:22:41.815430 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 01:22:41.815553 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:22:41.822840 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 01:22:41.824416 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 01:22:41.824526 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 01:22:41.827526 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 01:22:41.827633 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:22:41.828537 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 01:22:41.828646 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 01:22:41.829396 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 01:22:41.829463 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:22:41.831216 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:22:41.849817 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 01:22:41.850114 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:22:41.853267 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 01:22:41.853451 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 01:22:41.855713 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jul 7 01:22:41.855821 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 01:22:41.856746 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 01:22:41.856805 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:22:41.858368 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 01:22:41.858445 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 01:22:41.860730 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 01:22:41.860798 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 01:22:41.862288 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 01:22:41.862375 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:22:41.873883 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 01:22:41.874685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 01:22:41.874775 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:22:41.875638 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 01:22:41.875707 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:22:41.877456 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 01:22:41.877531 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:22:41.881661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:22:41.881753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:22:41.884808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 01:22:41.884975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 01:22:41.886363 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 01:22:41.895781 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 01:22:41.908537 systemd[1]: Switching root. Jul 7 01:22:41.941530 systemd-journald[201]: Journal stopped Jul 7 01:22:43.549853 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jul 7 01:22:43.549954 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 01:22:43.550003 kernel: SELinux: policy capability open_perms=1 Jul 7 01:22:43.550027 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 01:22:43.550045 kernel: SELinux: policy capability always_check_network=0 Jul 7 01:22:43.550064 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 01:22:43.550082 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 01:22:43.550100 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 01:22:43.550125 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 01:22:43.550162 kernel: audit: type=1403 audit(1751851362.349:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 01:22:43.550185 systemd[1]: Successfully loaded SELinux policy in 51.778ms. Jul 7 01:22:43.550224 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.264ms. 
Jul 7 01:22:43.550247 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 01:22:43.550268 systemd[1]: Detected virtualization kvm. Jul 7 01:22:43.550289 systemd[1]: Detected architecture x86-64. Jul 7 01:22:43.550309 systemd[1]: Detected first boot. Jul 7 01:22:43.550329 systemd[1]: Hostname set to . Jul 7 01:22:43.550349 systemd[1]: Initializing machine ID from VM UUID. Jul 7 01:22:43.550369 zram_generator::config[1067]: No configuration found. Jul 7 01:22:43.550402 systemd[1]: Populated /etc with preset unit settings. Jul 7 01:22:43.550425 systemd[1]: Queued start job for default target multi-user.target. Jul 7 01:22:43.550445 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 01:22:43.550472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 01:22:43.550493 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 01:22:43.550512 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 01:22:43.550533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 01:22:43.550560 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 01:22:43.550676 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 01:22:43.550701 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 01:22:43.550721 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 01:22:43.550741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:22:43.550761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:22:43.550781 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 01:22:43.550801 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 01:22:43.550821 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 01:22:43.550843 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 01:22:43.550876 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 01:22:43.550899 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:22:43.550919 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 01:22:43.550951 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:22:43.550973 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 01:22:43.551019 systemd[1]: Reached target slices.target - Slice Units. Jul 7 01:22:43.551065 systemd[1]: Reached target swap.target - Swaps. Jul 7 01:22:43.551088 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 01:22:43.551109 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 01:22:43.551129 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jul 7 01:22:43.551149 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 01:22:43.551170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:22:43.551190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 01:22:43.551223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:22:43.551245 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 01:22:43.551265 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 01:22:43.551285 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 01:22:43.551306 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 01:22:43.551326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:22:43.551346 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 01:22:43.551371 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 01:22:43.551404 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 01:22:43.551426 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 01:22:43.551447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:22:43.551467 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 01:22:43.551487 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 01:22:43.551507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:22:43.551527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:22:43.551547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:22:43.551581 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 01:22:43.551617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 01:22:43.551639 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 01:22:43.551661 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 7 01:22:43.551682 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 7 01:22:43.551701 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 01:22:43.551720 kernel: loop: module loaded Jul 7 01:22:43.551740 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 01:22:43.551759 kernel: fuse: init (API version 7.39) Jul 7 01:22:43.551778 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 01:22:43.551811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 01:22:43.551833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 01:22:43.551868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 7 01:22:43.551890 kernel: ACPI: bus type drm_connector registered Jul 7 01:22:43.551909 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 01:22:43.551929 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 01:22:43.551949 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 01:22:43.551969 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 01:22:43.552040 systemd-journald[1178]: Collecting audit messages is disabled. Jul 7 01:22:43.552078 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 01:22:43.552099 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 01:22:43.552119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 01:22:43.552140 systemd-journald[1178]: Journal started Jul 7 01:22:43.552179 systemd-journald[1178]: Runtime Journal (/run/log/journal/103d322dbf8c41ec835f368e03e6b705) is 4.7M, max 38.0M, 33.2M free. Jul 7 01:22:43.557610 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 01:22:43.559326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:22:43.560527 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 01:22:43.560875 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 01:22:43.562066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:22:43.562306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:22:43.563561 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:22:43.563827 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:22:43.565100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:22:43.565334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:22:43.566741 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 01:22:43.566989 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 01:22:43.568307 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:22:43.568628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:22:43.571006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 01:22:43.573193 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 01:22:43.574431 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 01:22:43.593534 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 01:22:43.600680 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 01:22:43.610692 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 01:22:43.611549 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 01:22:43.623759 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 01:22:43.639804 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 01:22:43.643481 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 7 01:22:43.649183 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 01:22:43.650730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:22:43.660754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:22:43.672495 systemd-journald[1178]: Time spent on flushing to /var/log/journal/103d322dbf8c41ec835f368e03e6b705 is 44.747ms for 1128 entries. Jul 7 01:22:43.672495 systemd-journald[1178]: System Journal (/var/log/journal/103d322dbf8c41ec835f368e03e6b705) is 8.0M, max 584.8M, 576.8M free. Jul 7 01:22:43.759433 systemd-journald[1178]: Received client request to flush runtime journal. Jul 7 01:22:43.674818 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 01:22:43.689827 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 01:22:43.691811 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 01:22:43.696210 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 01:22:43.702050 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 01:22:43.753580 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:22:43.762869 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 01:22:43.781270 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Jul 7 01:22:43.781296 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Jul 7 01:22:43.790322 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:22:43.797852 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 01:22:43.828678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:22:43.839816 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 01:22:43.843429 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 01:22:43.853779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 01:22:43.869255 udevadm[1242]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 7 01:22:43.897142 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jul 7 01:22:43.897714 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jul 7 01:22:43.905245 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:22:44.452542 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 01:22:44.463836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:22:44.497013 systemd-udevd[1251]: Using default interface naming scheme 'v255'. Jul 7 01:22:44.527362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:22:44.540958 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 01:22:44.576109 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 01:22:44.616276 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jul 7 01:22:44.619597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1260) Jul 7 01:22:44.716769 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 01:22:44.740632 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 01:22:44.833596 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 01:22:44.837305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 01:22:44.844410 systemd-networkd[1256]: lo: Link UP Jul 7 01:22:44.844423 systemd-networkd[1256]: lo: Gained carrier Jul 7 01:22:44.846779 systemd-networkd[1256]: Enumeration completed Jul 7 01:22:44.846954 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 01:22:44.847424 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:22:44.847431 systemd-networkd[1256]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:22:44.849153 kernel: ACPI: button: Power Button [PWRF] Jul 7 01:22:44.849432 systemd-networkd[1256]: eth0: Link UP Jul 7 01:22:44.849445 systemd-networkd[1256]: eth0: Gained carrier Jul 7 01:22:44.849464 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:22:44.856774 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 01:22:44.887717 systemd-networkd[1256]: eth0: DHCPv4 address 10.244.21.90/30, gateway 10.244.21.89 acquired from 10.244.21.89 Jul 7 01:22:44.918611 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 01:22:44.922807 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 7 01:22:44.924867 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 01:22:44.931869 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 7 01:22:44.985237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:22:45.151550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:22:45.184047 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 01:22:45.191892 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 01:22:45.218613 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 01:22:45.250366 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 01:22:45.252410 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:22:45.268999 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 01:22:45.275395 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 01:22:45.306081 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 01:22:45.307843 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 01:22:45.308791 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 01:22:45.308961 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 7 01:22:45.309988 systemd[1]: Reached target machines.target - Containers. Jul 7 01:22:45.312453 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 01:22:45.319799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 01:22:45.323793 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 01:22:45.325865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:22:45.328355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 01:22:45.340533 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 01:22:45.353807 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 01:22:45.356975 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 01:22:45.371173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 01:22:45.385404 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 01:22:45.389012 kernel: loop0: detected capacity change from 0 to 142488 Jul 7 01:22:45.389313 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 01:22:45.420637 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 01:22:45.440676 kernel: loop1: detected capacity change from 0 to 8 Jul 7 01:22:45.460662 kernel: loop2: detected capacity change from 0 to 221472 Jul 7 01:22:45.514618 kernel: loop3: detected capacity change from 0 to 140768 Jul 7 01:22:45.572591 kernel: loop4: detected capacity change from 0 to 142488 Jul 7 01:22:45.593628 kernel: loop5: detected capacity change from 0 to 8 Jul 7 01:22:45.598595 kernel: loop6: detected capacity change from 0 to 221472 Jul 7 01:22:45.626607 kernel: loop7: detected capacity change from 0 to 140768 Jul 7 01:22:45.644106 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 7 01:22:45.644910 (sd-merge)[1315]: Merged extensions into '/usr'. Jul 7 01:22:45.651799 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 01:22:45.651830 systemd[1]: Reloading... Jul 7 01:22:45.763003 zram_generator::config[1343]: No configuration found. Jul 7 01:22:46.002561 ldconfig[1298]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 01:22:46.004603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:22:46.101747 systemd[1]: Reloading finished in 449 ms. Jul 7 01:22:46.107303 systemd-networkd[1256]: eth0: Gained IPv6LL Jul 7 01:22:46.127145 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 01:22:46.128533 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 01:22:46.129755 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 01:22:46.142886 systemd[1]: Starting ensure-sysext.service... Jul 7 01:22:46.159339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 7 01:22:46.175894 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Jul 7 01:22:46.175973 systemd[1]: Reloading... Jul 7 01:22:46.207655 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 01:22:46.208332 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 01:22:46.210968 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 01:22:46.211412 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jul 7 01:22:46.211531 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jul 7 01:22:46.217809 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:22:46.217829 systemd-tmpfiles[1409]: Skipping /boot Jul 7 01:22:46.234735 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:22:46.234756 systemd-tmpfiles[1409]: Skipping /boot Jul 7 01:22:46.265597 zram_generator::config[1435]: No configuration found. Jul 7 01:22:46.468027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:22:46.559647 systemd[1]: Reloading finished in 383 ms. Jul 7 01:22:46.591372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:22:46.599366 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 01:22:46.605802 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 01:22:46.611447 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 01:22:46.618457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 01:22:46.632152 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 01:22:46.653086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:22:46.653971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:22:46.657713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:22:46.669968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:22:46.684248 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 01:22:46.686800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:22:46.686982 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:22:46.698363 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:22:46.699477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:22:46.699861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 01:22:46.700084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:22:46.705907 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 01:22:46.710891 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 01:22:46.713502 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:22:46.715752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:22:46.720734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:22:46.720995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:22:46.724488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:22:46.724749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:22:46.739790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:22:46.740806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:22:46.741951 augenrules[1532]: No rules Jul 7 01:22:46.748952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:22:46.754584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:22:46.763961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:22:46.776975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 01:22:46.781790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:22:46.785309 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 01:22:46.789172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:22:46.793073 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 01:22:46.803718 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 01:22:46.811277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:22:46.811758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:22:46.817252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:22:46.817510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:22:46.819778 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:22:46.820030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:22:46.823337 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:22:46.826986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:22:46.841674 systemd[1]: Finished ensure-sysext.service. Jul 7 01:22:46.845120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 01:22:46.846811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:22:46.855826 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jul 7 01:22:46.857959 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 01:22:46.864640 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 01:22:46.866871 systemd-resolved[1511]: Positive Trust Anchors: Jul 7 01:22:46.866915 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 01:22:46.866975 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 01:22:46.874470 systemd-resolved[1511]: Using system hostname 'srv-3dgpq.gb1.brightbox.com'. Jul 7 01:22:46.878114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 01:22:46.879988 systemd[1]: Reached target network.target - Network. Jul 7 01:22:46.880843 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 01:22:46.881707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:22:46.940470 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 01:22:46.941647 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 01:22:46.943523 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 01:22:46.944382 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 01:22:46.945204 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 01:22:46.946023 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 01:22:46.946073 systemd[1]: Reached target paths.target - Path Units. Jul 7 01:22:46.946725 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 01:22:46.947718 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 01:22:46.948645 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 01:22:46.949421 systemd[1]: Reached target timers.target - Timer Units. Jul 7 01:22:46.951381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 01:22:46.955176 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 01:22:46.958500 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 01:22:46.961413 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 01:22:46.962233 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 01:22:46.962350 systemd-networkd[1256]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:556:24:19ff:fef4:155a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:556:24:19ff:fef4:155a/64 assigned by NDisc. 
Jul 7 01:22:46.962357 systemd-networkd[1256]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 01:22:46.962916 systemd[1]: Reached target basic.target - Basic System. Jul 7 01:22:46.963916 systemd[1]: System is tainted: cgroupsv1 Jul 7 01:22:46.963987 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:22:46.964029 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:22:46.971781 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 01:22:46.977199 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 01:22:46.981755 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 01:22:46.990752 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 01:22:46.996879 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 01:22:47.001712 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 01:22:47.005620 jq[1573]: false Jul 7 01:22:47.012745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:22:47.024817 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 01:22:47.044216 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 01:22:47.049696 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 01:22:47.056151 extend-filesystems[1574]: Found loop4 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found loop5 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found loop6 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found loop7 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda1 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda2 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda3 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found usr Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda4 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda6 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda7 Jul 7 01:22:47.059722 extend-filesystems[1574]: Found vda9 Jul 7 01:22:47.059722 extend-filesystems[1574]: Checking size of /dev/vda9 Jul 7 01:22:47.061762 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 01:22:47.065203 dbus-daemon[1572]: [system] SELinux support is enabled Jul 7 01:22:47.068170 dbus-daemon[1572]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1256 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 01:22:47.083693 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 01:22:47.104833 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 01:22:47.112703 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 01:22:47.118949 extend-filesystems[1574]: Resized partition /dev/vda9 Jul 7 01:22:47.131023 extend-filesystems[1601]: resize2fs 1.47.1 (20-May-2024) Jul 7 01:22:47.124053 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 7 01:22:47.140728 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 01:22:47.149178 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jul 7 01:22:47.146301 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 01:22:47.170237 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 01:22:47.170671 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 01:22:47.172254 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 01:22:47.172630 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 01:22:47.177189 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 01:22:47.177594 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 01:22:47.198738 update_engine[1600]: I20250707 01:22:47.197125 1600 main.cc:92] Flatcar Update Engine starting Jul 7 01:22:47.207723 jq[1605]: true Jul 7 01:22:47.217015 dbus-daemon[1572]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 01:22:47.240095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1265) Jul 7 01:22:47.240398 update_engine[1600]: I20250707 01:22:47.235771 1600 update_check_scheduler.cc:74] Next update check in 4m40s Jul 7 01:22:47.248469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 01:22:47.282196 (ntainerd)[1624]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 01:22:47.293710 jq[1620]: true Jul 7 01:22:47.295198 systemd[1]: Started update-engine.service - Update Engine. Jul 7 01:22:47.297948 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 01:22:47.297992 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 01:22:47.308874 tar[1611]: linux-amd64/helm Jul 7 01:22:47.314108 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 01:22:47.315698 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 01:22:47.315740 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 01:22:47.317385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 01:22:47.329752 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 01:22:47.517180 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 01:22:47.519948 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 01:22:47.521441 systemd-logind[1596]: New seat seat0. Jul 7 01:22:47.528728 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 01:22:47.616180 dbus-daemon[1572]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 01:22:47.616410 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jul 7 01:22:47.618827 dbus-daemon[1572]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1632 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 01:22:47.629984 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 01:22:47.690948 polkitd[1649]: Started polkitd version 121 Jul 7 01:22:47.721506 polkitd[1649]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 01:22:47.730437 bash[1648]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:22:47.722189 polkitd[1649]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 01:22:47.728605 polkitd[1649]: Finished loading, compiling and executing 2 rules Jul 7 01:22:47.732647 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 01:22:47.752984 dbus-daemon[1572]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 01:22:47.754999 systemd[1]: Starting sshkeys.service... Jul 7 01:22:47.757036 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 01:22:47.767612 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 7 01:22:47.763631 polkitd[1649]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 01:22:47.798026 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 01:22:47.798026 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 7 01:22:47.798026 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 7 01:22:47.830406 extend-filesystems[1574]: Resized filesystem in /dev/vda9 Jul 7 01:22:47.799309 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 01:22:47.802213 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 01:22:47.839430 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 01:22:47.848015 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 01:22:47.859967 systemd-hostnamed[1632]: Hostname set to (static) Jul 7 01:22:47.864596 sshd_keygen[1622]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 01:22:47.881812 containerd[1624]: time="2025-07-07T01:22:47.880171255Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 01:22:47.923685 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 01:22:47.954327 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 01:22:47.965110 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 01:22:48.958934 systemd-timesyncd[1563]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org). Jul 7 01:22:48.959031 systemd-timesyncd[1563]: Initial clock synchronization to Mon 2025-07-07 01:22:48.958392 UTC. Jul 7 01:22:48.960374 systemd-resolved[1511]: Clock change detected. Flushing caches. Jul 7 01:22:48.972869 containerd[1624]: time="2025-07-07T01:22:48.972816798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:22:48.977347 containerd[1624]: time="2025-07-07T01:22:48.976611759Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:22:48.977347 containerd[1624]: time="2025-07-07T01:22:48.976675929Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 01:22:48.977347 containerd[1624]: time="2025-07-07T01:22:48.976702435Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 01:22:48.977347 containerd[1624]: time="2025-07-07T01:22:48.976980633Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 01:22:48.977347 containerd[1624]: time="2025-07-07T01:22:48.977008970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 01:22:48.977347 containerd[1624]: time="2025-07-07T01:22:48.977118828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:22:48.977347 containerd[1624]: time="2025-07-07T01:22:48.977151844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.979356605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.979392108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.979416572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.979433732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.979588752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.980007008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.980192752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:22:48.980325 containerd[1624]: time="2025-07-07T01:22:48.980218546Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 01:22:48.981430 containerd[1624]: time="2025-07-07T01:22:48.981401086Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 7 01:22:48.981842 containerd[1624]: time="2025-07-07T01:22:48.981814644Z" level=info msg="metadata content store policy set" policy=shared Jul 7 01:22:48.988136 containerd[1624]: time="2025-07-07T01:22:48.988090490Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 01:22:48.989417 containerd[1624]: time="2025-07-07T01:22:48.989385671Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 01:22:48.989613 containerd[1624]: time="2025-07-07T01:22:48.989585820Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 01:22:48.990320 containerd[1624]: time="2025-07-07T01:22:48.989732676Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 01:22:48.990320 containerd[1624]: time="2025-07-07T01:22:48.989815534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 01:22:48.990320 containerd[1624]: time="2025-07-07T01:22:48.990059469Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 01:22:48.993853 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 01:22:48.994241 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.995586113Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.995866596Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.995895377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.995917582Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.995940257Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.995961681Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.995983777Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.996007062Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.996029685Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.996050953Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.996070459Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.996089561Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.996132403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.997503 containerd[1624]: time="2025-07-07T01:22:48.996158262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996178162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996200950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996221927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996242173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996262887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996304539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996330036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996354899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996373653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996393558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996413372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996448659Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996485756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996507939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998034 containerd[1624]: time="2025-07-07T01:22:48.996525791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996590191Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996637940Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996663540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996685120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996702058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996728749Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996752012Z" level=info msg="NRI interface is disabled by configuration." Jul 7 01:22:48.998616 containerd[1624]: time="2025-07-07T01:22:48.996771011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 01:22:48.998922 containerd[1624]: time="2025-07-07T01:22:48.997170799Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 01:22:48.998922 containerd[1624]: time="2025-07-07T01:22:48.997256053Z" level=info msg="Connect containerd service" Jul 7 01:22:49.004064 containerd[1624]: time="2025-07-07T01:22:49.001764016Z" level=info msg="using legacy CRI server" Jul 7 01:22:49.004064 containerd[1624]: time="2025-07-07T01:22:49.001836669Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 01:22:49.004064 containerd[1624]: time="2025-07-07T01:22:49.002093870Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.004622342Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.004856454Z" level=info msg="Start subscribing containerd event" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.004983939Z" level=info msg="Start recovering state" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.005099887Z" level=info msg="Start event monitor" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.005154814Z" level=info msg="Start snapshots syncer" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.005180500Z" level=info msg="Start cni network conf syncer for default" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.005200246Z" level=info msg="Start streaming server" Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.006216695Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.006333162Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 01:22:49.006859 containerd[1624]: time="2025-07-07T01:22:49.006434482Z" level=info msg="containerd successfully booted in 0.145246s" Jul 7 01:22:49.012117 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 01:22:49.014072 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 01:22:49.055854 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 01:22:49.069634 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 01:22:49.082513 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 01:22:49.085665 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 01:22:49.550472 tar[1611]: linux-amd64/LICENSE Jul 7 01:22:49.550472 tar[1611]: linux-amd64/README.md Jul 7 01:22:49.576625 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 01:22:49.872553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 01:22:49.904245 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:22:50.521579 kubelet[1722]: E0707 01:22:50.521157 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:22:50.526623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:22:50.526976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:22:52.539819 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 01:22:52.557488 systemd[1]: Started sshd@0-10.244.21.90:22-139.178.68.195:47442.service - OpenSSH per-connection server daemon (139.178.68.195:47442). Jul 7 01:22:53.556451 sshd[1734]: Accepted publickey for core from 139.178.68.195 port 47442 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:22:53.559463 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:22:53.579324 systemd-logind[1596]: New session 1 of user core. Jul 7 01:22:53.582403 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 01:22:53.594013 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 01:22:53.619418 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 01:22:53.632317 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 01:22:53.646390 (systemd)[1740]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 01:22:53.793822 systemd[1740]: Queued start job for default target default.target. Jul 7 01:22:53.794375 systemd[1740]: Created slice app.slice - User Application Slice. Jul 7 01:22:53.794407 systemd[1740]: Reached target paths.target - Paths. Jul 7 01:22:53.794429 systemd[1740]: Reached target timers.target - Timers. Jul 7 01:22:53.800448 systemd[1740]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 01:22:53.813210 systemd[1740]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 01:22:53.813322 systemd[1740]: Reached target sockets.target - Sockets. Jul 7 01:22:53.813350 systemd[1740]: Reached target basic.target - Basic System. Jul 7 01:22:53.813415 systemd[1740]: Reached target default.target - Main User Target. Jul 7 01:22:53.813478 systemd[1740]: Startup finished in 157ms. Jul 7 01:22:53.813656 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 01:22:53.820824 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 01:22:54.146056 login[1707]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 01:22:54.150634 login[1708]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 01:22:54.154723 systemd-logind[1596]: New session 2 of user core. Jul 7 01:22:54.166616 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 01:22:54.171182 systemd-logind[1596]: New session 3 of user core. Jul 7 01:22:54.179929 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 01:22:54.531126 systemd[1]: Started sshd@1-10.244.21.90:22-139.178.68.195:47458.service - OpenSSH per-connection server daemon (139.178.68.195:47458). 
Jul 7 01:22:55.106408 coreos-metadata[1570]: Jul 07 01:22:55.106 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:22:55.132048 coreos-metadata[1570]: Jul 07 01:22:55.131 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 7 01:22:55.141623 coreos-metadata[1570]: Jul 07 01:22:55.141 INFO Fetch failed with 404: resource not found Jul 7 01:22:55.141856 coreos-metadata[1570]: Jul 07 01:22:55.141 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 01:22:55.142625 coreos-metadata[1570]: Jul 07 01:22:55.142 INFO Fetch successful Jul 7 01:22:55.142910 coreos-metadata[1570]: Jul 07 01:22:55.142 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 7 01:22:55.158416 coreos-metadata[1570]: Jul 07 01:22:55.158 INFO Fetch successful Jul 7 01:22:55.158416 coreos-metadata[1570]: Jul 07 01:22:55.158 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 7 01:22:55.177334 coreos-metadata[1570]: Jul 07 01:22:55.177 INFO Fetch successful Jul 7 01:22:55.177586 coreos-metadata[1570]: Jul 07 01:22:55.177 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 7 01:22:55.204942 coreos-metadata[1570]: Jul 07 01:22:55.204 INFO Fetch successful Jul 7 01:22:55.205208 coreos-metadata[1570]: Jul 07 01:22:55.205 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 7 01:22:55.226717 coreos-metadata[1570]: Jul 07 01:22:55.226 INFO Fetch successful Jul 7 01:22:55.261448 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 01:22:55.264194 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 01:22:55.550161 sshd[1780]: Accepted publickey for core from 139.178.68.195 port 47458 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:22:55.552225 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:22:55.558599 systemd-logind[1596]: New session 4 of user core. Jul 7 01:22:55.566973 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 01:22:55.942730 coreos-metadata[1680]: Jul 07 01:22:55.942 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:22:55.964676 coreos-metadata[1680]: Jul 07 01:22:55.964 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 7 01:22:55.997550 coreos-metadata[1680]: Jul 07 01:22:55.997 INFO Fetch successful Jul 7 01:22:55.997792 coreos-metadata[1680]: Jul 07 01:22:55.997 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 01:22:56.027156 coreos-metadata[1680]: Jul 07 01:22:56.026 INFO Fetch successful Jul 7 01:22:56.031649 unknown[1680]: wrote ssh authorized keys file for user: core Jul 7 01:22:56.061074 update-ssh-keys[1798]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:22:56.062077 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 01:22:56.069176 systemd[1]: Finished sshkeys.service. Jul 7 01:22:56.073153 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 01:22:56.073831 systemd[1]: Startup finished in 18.945s (kernel) + 12.787s (userspace) = 31.733s. 
Jul 7 01:22:56.249451 sshd[1780]: pam_unix(sshd:session): session closed for user core Jul 7 01:22:56.255639 systemd[1]: sshd@1-10.244.21.90:22-139.178.68.195:47458.service: Deactivated successfully. Jul 7 01:22:56.259089 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 01:22:56.261657 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. Jul 7 01:22:56.263366 systemd-logind[1596]: Removed session 4. Jul 7 01:22:56.425746 systemd[1]: Started sshd@2-10.244.21.90:22-139.178.68.195:47474.service - OpenSSH per-connection server daemon (139.178.68.195:47474). Jul 7 01:22:57.431242 sshd[1808]: Accepted publickey for core from 139.178.68.195 port 47474 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:22:57.433924 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:22:57.441118 systemd-logind[1596]: New session 5 of user core. Jul 7 01:22:57.448822 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 01:22:58.129651 sshd[1808]: pam_unix(sshd:session): session closed for user core Jul 7 01:22:58.134690 systemd[1]: sshd@2-10.244.21.90:22-139.178.68.195:47474.service: Deactivated successfully. Jul 7 01:22:58.138400 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. Jul 7 01:22:58.139250 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 01:22:58.140705 systemd-logind[1596]: Removed session 5. Jul 7 01:22:58.298801 systemd[1]: Started sshd@3-10.244.21.90:22-139.178.68.195:47482.service - OpenSSH per-connection server daemon (139.178.68.195:47482). Jul 7 01:22:59.291625 sshd[1816]: Accepted publickey for core from 139.178.68.195 port 47482 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:22:59.293795 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:22:59.301047 systemd-logind[1596]: New session 6 of user core. Jul 7 01:22:59.311799 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 01:22:59.982705 sshd[1816]: pam_unix(sshd:session): session closed for user core Jul 7 01:22:59.986458 systemd[1]: sshd@3-10.244.21.90:22-139.178.68.195:47482.service: Deactivated successfully. Jul 7 01:22:59.990739 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. Jul 7 01:22:59.990929 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 01:22:59.993092 systemd-logind[1596]: Removed session 6. Jul 7 01:23:00.148893 systemd[1]: Started sshd@4-10.244.21.90:22-139.178.68.195:51766.service - OpenSSH per-connection server daemon (139.178.68.195:51766). Jul 7 01:23:00.599480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 01:23:00.606709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:23:00.808585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 01:23:00.813827 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:23:00.892763 kubelet[1837]: E0707 01:23:00.889415 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:23:00.896598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:23:00.896905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:23:01.165058 sshd[1824]: Accepted publickey for core from 139.178.68.195 port 51766 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:23:01.166729 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:23:01.174770 systemd-logind[1596]: New session 7 of user core. Jul 7 01:23:01.186998 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 01:23:01.707108 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 01:23:01.707617 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:23:01.720929 sudo[1848]: pam_unix(sudo:session): session closed for user root Jul 7 01:23:01.878819 sshd[1824]: pam_unix(sshd:session): session closed for user core Jul 7 01:23:01.884658 systemd[1]: sshd@4-10.244.21.90:22-139.178.68.195:51766.service: Deactivated successfully. Jul 7 01:23:01.888525 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit. Jul 7 01:23:01.889353 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 01:23:01.891795 systemd-logind[1596]: Removed session 7. Jul 7 01:23:02.045680 systemd[1]: Started sshd@5-10.244.21.90:22-139.178.68.195:51776.service - OpenSSH per-connection server daemon (139.178.68.195:51776). Jul 7 01:23:03.044308 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 51776 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:23:03.046496 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:23:03.053061 systemd-logind[1596]: New session 8 of user core. Jul 7 01:23:03.063815 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 01:23:03.571902 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 01:23:03.572960 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:23:03.578988 sudo[1858]: pam_unix(sudo:session): session closed for user root Jul 7 01:23:03.588156 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 01:23:03.588740 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:23:03.610750 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 01:23:03.614539 auditctl[1861]: No rules Jul 7 01:23:03.616116 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 01:23:03.617564 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 01:23:03.628840 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jul 7 01:23:03.663489 augenrules[1880]: No rules Jul 7 01:23:03.665757 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 01:23:03.667646 sudo[1857]: pam_unix(sudo:session): session closed for user root Jul 7 01:23:03.829646 sshd[1853]: pam_unix(sshd:session): session closed for user core Jul 7 01:23:03.833978 systemd[1]: sshd@5-10.244.21.90:22-139.178.68.195:51776.service: Deactivated successfully. Jul 7 01:23:03.837319 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Jul 7 01:23:03.840628 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 01:23:03.841664 systemd-logind[1596]: Removed session 8. Jul 7 01:23:04.004917 systemd[1]: Started sshd@6-10.244.21.90:22-139.178.68.195:51780.service - OpenSSH per-connection server daemon (139.178.68.195:51780). Jul 7 01:23:05.006004 sshd[1889]: Accepted publickey for core from 139.178.68.195 port 51780 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:23:05.009264 sshd[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:23:05.030585 systemd-logind[1596]: New session 9 of user core. Jul 7 01:23:05.038933 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 01:23:05.540623 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 01:23:05.541088 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:23:06.031888 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 01:23:06.035270 (dockerd)[1909]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 01:23:06.504855 dockerd[1909]: time="2025-07-07T01:23:06.504227760Z" level=info msg="Starting up" Jul 7 01:23:06.789503 dockerd[1909]: time="2025-07-07T01:23:06.788659607Z" level=info msg="Loading containers: start." Jul 7 01:23:06.934528 kernel: Initializing XFRM netlink socket Jul 7 01:23:07.050732 systemd-networkd[1256]: docker0: Link UP Jul 7 01:23:07.076791 dockerd[1909]: time="2025-07-07T01:23:07.076724807Z" level=info msg="Loading containers: done." Jul 7 01:23:07.098779 dockerd[1909]: time="2025-07-07T01:23:07.098693575Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 01:23:07.098990 dockerd[1909]: time="2025-07-07T01:23:07.098851367Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 01:23:07.099088 dockerd[1909]: time="2025-07-07T01:23:07.099055012Z" level=info msg="Daemon has completed initialization" Jul 7 01:23:07.142391 dockerd[1909]: time="2025-07-07T01:23:07.141264089Z" level=info msg="API listen on /run/docker.sock" Jul 7 01:23:07.141731 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 01:23:08.005263 containerd[1624]: time="2025-07-07T01:23:08.005151698Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 7 01:23:09.221573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271501168.mount: Deactivated successfully. Jul 7 01:23:11.099860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 01:23:11.107640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 01:23:11.327533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:11.335858 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:23:11.444823 kubelet[2120]: E0707 01:23:11.443159 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:23:11.445511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:23:11.445814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:23:12.157360 containerd[1624]: time="2025-07-07T01:23:12.157020423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:12.159798 containerd[1624]: time="2025-07-07T01:23:12.159697894Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" Jul 7 01:23:12.162543 containerd[1624]: time="2025-07-07T01:23:12.160915465Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:12.182787 containerd[1624]: time="2025-07-07T01:23:12.182715953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:12.184538 containerd[1624]: time="2025-07-07T01:23:12.184482629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 4.179214167s" Jul 7 01:23:12.184698 containerd[1624]: time="2025-07-07T01:23:12.184667876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 7 01:23:12.186559 containerd[1624]: time="2025-07-07T01:23:12.186523116Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 7 01:23:14.832589 containerd[1624]: time="2025-07-07T01:23:14.832400757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:14.835081 containerd[1624]: time="2025-07-07T01:23:14.834984057Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" Jul 7 01:23:14.836632 containerd[1624]: time="2025-07-07T01:23:14.836570343Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:14.841134 containerd[1624]: time="2025-07-07T01:23:14.841064307Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:14.843469 containerd[1624]: time="2025-07-07T01:23:14.842618578Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.656043994s" Jul 7 01:23:14.843469 containerd[1624]: time="2025-07-07T01:23:14.842695465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 7 01:23:14.844016 containerd[1624]: time="2025-07-07T01:23:14.843986869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 7 01:23:17.405358 containerd[1624]: time="2025-07-07T01:23:17.404437602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:17.423669 containerd[1624]: time="2025-07-07T01:23:17.423506697Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" Jul 7 01:23:17.425653 containerd[1624]: time="2025-07-07T01:23:17.425561547Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:17.432256 containerd[1624]: time="2025-07-07T01:23:17.432150673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:17.434190 containerd[1624]: time="2025-07-07T01:23:17.433157318Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.588807827s" Jul 7 01:23:17.434190 containerd[1624]: time="2025-07-07T01:23:17.433226478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 7 01:23:17.435123 containerd[1624]: time="2025-07-07T01:23:17.435089728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 7 01:23:18.863667 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 01:23:20.073194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914965058.mount: Deactivated successfully. 
Jul 7 01:23:20.833781 containerd[1624]: time="2025-07-07T01:23:20.832471562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:20.833781 containerd[1624]: time="2025-07-07T01:23:20.833583211Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" Jul 7 01:23:20.834700 containerd[1624]: time="2025-07-07T01:23:20.834651492Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:20.842662 containerd[1624]: time="2025-07-07T01:23:20.839599909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:20.845189 containerd[1624]: time="2025-07-07T01:23:20.840155522Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.404903093s" Jul 7 01:23:20.845189 containerd[1624]: time="2025-07-07T01:23:20.843098888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 7 01:23:20.847950 containerd[1624]: time="2025-07-07T01:23:20.847915592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 01:23:21.599696 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 01:23:21.613105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:23:21.869558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:21.877023 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:23:21.953315 kubelet[2160]: E0707 01:23:21.952267 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:23:21.957242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:23:21.957809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:23:22.030491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1304291676.mount: Deactivated successfully. 
Jul 7 01:23:23.665392 containerd[1624]: time="2025-07-07T01:23:23.665231924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:23.667560 containerd[1624]: time="2025-07-07T01:23:23.667412147Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 7 01:23:23.670306 containerd[1624]: time="2025-07-07T01:23:23.668959288Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:23.675540 containerd[1624]: time="2025-07-07T01:23:23.675471235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:23.679232 containerd[1624]: time="2025-07-07T01:23:23.679167030Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.831067498s" Jul 7 01:23:23.679490 containerd[1624]: time="2025-07-07T01:23:23.679448101Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 01:23:23.683502 containerd[1624]: time="2025-07-07T01:23:23.683448796Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 01:23:25.055886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46184458.mount: Deactivated successfully. 
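The pull records above report both an image size in bytes and the elapsed wall time, so effective registry throughput (roughly 6-10 MiB/s for these five images) can be read off directly. A small illustrative Go sketch, separate from the captured journal, using only figures copied from the "Pulled image" entries above:

```go
// Illustrative only: arithmetic on figures copied from the pull records above.
// The byte counts are the "size" fields and the durations are the reported
// pull times for each image.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"kube-apiserver:v1.31.8", 27957787, 4.179214167},
		{"kube-controller-manager:v1.31.8", 26202149, 2.656043994},
		{"kube-scheduler:v1.31.8", 20268777, 2.588807827},
		{"kube-proxy:v1.31.8", 30353644, 3.404903093},
		{"coredns:v1.11.3", 18562039, 2.831067498},
	}
	for _, p := range pulls {
		mib := p.bytes / (1 << 20)
		fmt.Printf("%-35s %6.1f MiB/s\n", p.image, mib/p.seconds)
	}
}
```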
Jul 7 01:23:25.066447 containerd[1624]: time="2025-07-07T01:23:25.065034836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:25.084251 containerd[1624]: time="2025-07-07T01:23:25.084087157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 7 01:23:25.095658 containerd[1624]: time="2025-07-07T01:23:25.095577956Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:25.101092 containerd[1624]: time="2025-07-07T01:23:25.100925191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:25.101992 containerd[1624]: time="2025-07-07T01:23:25.101771200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.418066995s" Jul 7 01:23:25.101992 containerd[1624]: time="2025-07-07T01:23:25.101817576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 01:23:25.102839 containerd[1624]: time="2025-07-07T01:23:25.102709692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 01:23:26.671672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693631909.mount: Deactivated successfully. Jul 7 01:23:32.100112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 01:23:32.115688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:23:32.573423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:32.591031 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:23:32.706651 kubelet[2288]: E0707 01:23:32.706518 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:23:32.709681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:23:32.712134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:23:33.356721 update_engine[1600]: I20250707 01:23:33.356493 1600 update_attempter.cc:509] Updating boot flags... 
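The kubelet.service restarts above (restart counters 2 through 4) all fail the same way: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status=1 until a later bootstrap step (typically kubeadm init/join) writes that file. A hypothetical Go sketch of the wait an operator might script while bootstrap completes; the path is taken from the error above, the polling interval is an arbitrary assumption:

```go
// Illustrative only, not part of the captured journal. Polls for the kubelet
// config file whose absence causes the crash loop recorded above.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const configPath = "/var/lib/kubelet/config.yaml" // path reported in the kubelet error
	for {
		if _, err := os.Stat(configPath); err == nil {
			fmt.Println("kubelet config present:", configPath)
			return
		} else if !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, "unexpected error:", err)
			return
		}
		fmt.Println("kubelet config not written yet, retrying")
		time.Sleep(10 * time.Second) // assumed interval
	}
}
```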
Jul 7 01:23:33.558707 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2303) Jul 7 01:23:34.237329 containerd[1624]: time="2025-07-07T01:23:34.234755889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:34.238344 containerd[1624]: time="2025-07-07T01:23:34.237629940Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 7 01:23:34.240172 containerd[1624]: time="2025-07-07T01:23:34.240096378Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:34.244670 containerd[1624]: time="2025-07-07T01:23:34.244611361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:23:34.247392 containerd[1624]: time="2025-07-07T01:23:34.247334478Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 9.144578266s" Jul 7 01:23:34.247484 containerd[1624]: time="2025-07-07T01:23:34.247416182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 01:23:37.546089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:37.556721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:23:37.614777 systemd[1]: Reloading requested from client PID 2339 ('systemctl') (unit session-9.scope)... Jul 7 01:23:37.614822 systemd[1]: Reloading... Jul 7 01:23:37.817397 zram_generator::config[2378]: No configuration found. Jul 7 01:23:37.973999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:23:38.080547 systemd[1]: Reloading finished in 465 ms. Jul 7 01:23:38.153998 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 01:23:38.154163 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 01:23:38.154669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:38.161927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:23:38.320527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:38.335959 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 01:23:38.401008 kubelet[2456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:23:38.401008 kubelet[2456]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 7 01:23:38.401008 kubelet[2456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:23:38.401761 kubelet[2456]: I0707 01:23:38.401077 2456 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 01:23:39.115841 kubelet[2456]: I0707 01:23:39.115745 2456 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 01:23:39.115841 kubelet[2456]: I0707 01:23:39.115803 2456 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 01:23:39.116161 kubelet[2456]: I0707 01:23:39.116137 2456 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 01:23:39.145833 kubelet[2456]: I0707 01:23:39.145765 2456 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 01:23:39.147343 kubelet[2456]: E0707 01:23:39.146917 2456 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.21.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:39.158051 kubelet[2456]: E0707 01:23:39.157895 2456 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 01:23:39.158051 kubelet[2456]: I0707 01:23:39.157940 2456 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 01:23:39.168528 kubelet[2456]: I0707 01:23:39.168482 2456 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 01:23:39.171642 kubelet[2456]: I0707 01:23:39.171606 2456 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 01:23:39.172120 kubelet[2456]: I0707 01:23:39.172075 2456 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 01:23:39.172513 kubelet[2456]: I0707 01:23:39.172121 2456 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3dgpq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 01:23:39.172513 kubelet[2456]: I0707 01:23:39.172490 2456 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 01:23:39.172513 kubelet[2456]: I0707 01:23:39.172515 2456 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 01:23:39.172884 kubelet[2456]: I0707 01:23:39.172714 2456 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:23:39.178030 kubelet[2456]: I0707 01:23:39.177609 2456 kubelet.go:408] "Attempting to sync node with API server" Jul 7 01:23:39.178030 kubelet[2456]: I0707 01:23:39.177653 2456 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 01:23:39.178030 kubelet[2456]: I0707 01:23:39.177723 2456 kubelet.go:314] "Adding apiserver pod source" Jul 7 01:23:39.178030 kubelet[2456]: I0707 01:23:39.177774 2456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 01:23:39.192264 kubelet[2456]: W0707 01:23:39.192016 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.21.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3dgpq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:39.192264 kubelet[2456]: E0707 01:23:39.192109 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.244.21.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3dgpq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:39.192848 kubelet[2456]: I0707 01:23:39.192668 2456 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 01:23:39.199747 kubelet[2456]: W0707 01:23:39.199468 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.21.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:39.199747 kubelet[2456]: E0707 01:23:39.199547 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.21.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:39.200528 kubelet[2456]: I0707 01:23:39.200467 2456 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 01:23:39.201408 kubelet[2456]: W0707 01:23:39.201373 2456 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 01:23:39.203276 kubelet[2456]: I0707 01:23:39.203236 2456 server.go:1274] "Started kubelet" Jul 7 01:23:39.204971 kubelet[2456]: I0707 01:23:39.204530 2456 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 01:23:39.206074 kubelet[2456]: I0707 01:23:39.206036 2456 server.go:449] "Adding debug handlers to kubelet server" Jul 7 01:23:39.212713 kubelet[2456]: I0707 01:23:39.211961 2456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 01:23:39.212713 kubelet[2456]: I0707 01:23:39.212513 2456 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 01:23:39.213314 kubelet[2456]: I0707 01:23:39.213253 2456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 01:23:39.221272 kubelet[2456]: E0707 01:23:39.215627 2456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.21.90:6443/api/v1/namespaces/default/events\": dial tcp 10.244.21.90:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-3dgpq.gb1.brightbox.com.184fd39584dee645 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3dgpq.gb1.brightbox.com,UID:srv-3dgpq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-3dgpq.gb1.brightbox.com,},FirstTimestamp:2025-07-07 01:23:39.203200581 +0000 UTC m=+0.862143356,LastTimestamp:2025-07-07 01:23:39.203200581 +0000 UTC m=+0.862143356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3dgpq.gb1.brightbox.com,}" Jul 7 01:23:39.223235 kubelet[2456]: I0707 01:23:39.223208 2456 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 01:23:39.224327 kubelet[2456]: I0707 01:23:39.224137 2456 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 01:23:39.224621 kubelet[2456]: E0707 01:23:39.224591 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:39.229217 kubelet[2456]: E0707 01:23:39.228464 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3dgpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.90:6443: connect: connection refused" interval="200ms" Jul 7 01:23:39.230170 kubelet[2456]: I0707 01:23:39.230148 2456 reconciler.go:26] "Reconciler: start to sync state" Jul 7 01:23:39.230382 kubelet[2456]: I0707 01:23:39.230363 2456 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 01:23:39.230921 kubelet[2456]: W0707 01:23:39.230874 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.21.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:39.231063 kubelet[2456]: E0707 01:23:39.231034 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.21.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:39.231618 kubelet[2456]: I0707 01:23:39.231595 2456 factory.go:221] Registration of the systemd container factory successfully Jul 7 01:23:39.231828 kubelet[2456]: I0707 01:23:39.231802 2456 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 01:23:39.233951 kubelet[2456]: E0707 01:23:39.233926 2456 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 01:23:39.234666 kubelet[2456]: I0707 01:23:39.234645 2456 factory.go:221] Registration of the containerd container factory successfully Jul 7 01:23:39.246485 kubelet[2456]: I0707 01:23:39.246380 2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 01:23:39.248375 kubelet[2456]: I0707 01:23:39.248177 2456 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 01:23:39.248375 kubelet[2456]: I0707 01:23:39.248256 2456 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 01:23:39.248375 kubelet[2456]: I0707 01:23:39.248317 2456 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 01:23:39.248581 kubelet[2456]: E0707 01:23:39.248411 2456 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 01:23:39.271119 kubelet[2456]: W0707 01:23:39.271028 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.21.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:39.271119 kubelet[2456]: E0707 01:23:39.271115 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.21.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:39.296901 kubelet[2456]: I0707 01:23:39.296813 2456 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 01:23:39.296901 kubelet[2456]: I0707 01:23:39.296864 2456 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 01:23:39.296901 kubelet[2456]: I0707 01:23:39.296921 2456 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:23:39.299473 kubelet[2456]: I0707 01:23:39.299432 2456 policy_none.go:49] "None policy: Start" Jul 7 01:23:39.300401 kubelet[2456]: I0707 01:23:39.300361 2456 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 01:23:39.300401 kubelet[2456]: I0707 01:23:39.300404 2456 state_mem.go:35] "Initializing new in-memory state store" Jul 7 01:23:39.310102 kubelet[2456]: I0707 01:23:39.310022 2456 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 01:23:39.310511 kubelet[2456]: I0707 01:23:39.310471 2456 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 01:23:39.310598 kubelet[2456]: I0707 01:23:39.310510 2456 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 01:23:39.315274 kubelet[2456]: I0707 01:23:39.313070 2456 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 01:23:39.320359 kubelet[2456]: E0707 01:23:39.320264 2456 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:39.414844 kubelet[2456]: I0707 01:23:39.414149 2456 kubelet_node_status.go:72] "Attempting to register node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.414844 kubelet[2456]: E0707 01:23:39.414682 2456 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.90:6443/api/v1/nodes\": dial tcp 10.244.21.90:6443: connect: connection refused" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.430261 kubelet[2456]: E0707 01:23:39.430154 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3dgpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.90:6443: connect: connection refused" interval="400ms" Jul 7 01:23:39.532475 kubelet[2456]: 
I0707 01:23:39.532351 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7a1a5c9d32c6e6809fed85f0348ad23-ca-certs\") pod \"kube-apiserver-srv-3dgpq.gb1.brightbox.com\" (UID: \"b7a1a5c9d32c6e6809fed85f0348ad23\") " pod="kube-system/kube-apiserver-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.532475 kubelet[2456]: I0707 01:23:39.532474 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7a1a5c9d32c6e6809fed85f0348ad23-k8s-certs\") pod \"kube-apiserver-srv-3dgpq.gb1.brightbox.com\" (UID: \"b7a1a5c9d32c6e6809fed85f0348ad23\") " pod="kube-system/kube-apiserver-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.533032 kubelet[2456]: I0707 01:23:39.532567 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7a1a5c9d32c6e6809fed85f0348ad23-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3dgpq.gb1.brightbox.com\" (UID: \"b7a1a5c9d32c6e6809fed85f0348ad23\") " pod="kube-system/kube-apiserver-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.533032 kubelet[2456]: I0707 01:23:39.532616 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-flexvolume-dir\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.533032 kubelet[2456]: I0707 01:23:39.532661 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-k8s-certs\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.533032 kubelet[2456]: I0707 01:23:39.532704 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-kubeconfig\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.533315 kubelet[2456]: I0707 01:23:39.532745 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.533483 kubelet[2456]: I0707 01:23:39.532807 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef75e3ba9aea58162febda624ae01a4-kubeconfig\") pod \"kube-scheduler-srv-3dgpq.gb1.brightbox.com\" (UID: \"6ef75e3ba9aea58162febda624ae01a4\") " pod="kube-system/kube-scheduler-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.533483 kubelet[2456]: I0707 01:23:39.533452 2456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-ca-certs\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.618702 kubelet[2456]: I0707 01:23:39.618610 2456 kubelet_node_status.go:72] "Attempting to register node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.619232 kubelet[2456]: E0707 01:23:39.619131 2456 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.90:6443/api/v1/nodes\": dial tcp 10.244.21.90:6443: connect: connection refused" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:39.670231 containerd[1624]: time="2025-07-07T01:23:39.669528186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3dgpq.gb1.brightbox.com,Uid:b7a1a5c9d32c6e6809fed85f0348ad23,Namespace:kube-system,Attempt:0,}" Jul 7 01:23:39.675930 containerd[1624]: time="2025-07-07T01:23:39.675826239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3dgpq.gb1.brightbox.com,Uid:c99d3427ca64a34a3c7b10a984f113f9,Namespace:kube-system,Attempt:0,}" Jul 7 01:23:39.676377 containerd[1624]: time="2025-07-07T01:23:39.675829123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3dgpq.gb1.brightbox.com,Uid:6ef75e3ba9aea58162febda624ae01a4,Namespace:kube-system,Attempt:0,}" Jul 7 01:23:39.831736 kubelet[2456]: E0707 01:23:39.831641 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3dgpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.90:6443: connect: connection refused" interval="800ms" Jul 7 01:23:40.023354 kubelet[2456]: I0707 01:23:40.022780 2456 kubelet_node_status.go:72] "Attempting to register node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:40.023354 kubelet[2456]: E0707 01:23:40.023207 2456 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.90:6443/api/v1/nodes\": dial tcp 10.244.21.90:6443: connect: connection refused" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:40.084613 kubelet[2456]: W0707 01:23:40.084522 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.21.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:40.084813 kubelet[2456]: E0707 01:23:40.084632 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.21.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:40.204088 kubelet[2456]: W0707 01:23:40.203948 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.21.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3dgpq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:40.204327 kubelet[2456]: E0707 01:23:40.204103 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.244.21.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3dgpq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:40.270387 kubelet[2456]: W0707 01:23:40.270209 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.21.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:40.270387 kubelet[2456]: E0707 01:23:40.270343 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.21.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:40.633322 kubelet[2456]: E0707 01:23:40.633137 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3dgpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.90:6443: connect: connection refused" interval="1.6s" Jul 7 01:23:40.777899 kubelet[2456]: W0707 01:23:40.777767 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.21.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:40.777899 kubelet[2456]: E0707 01:23:40.777860 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.21.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:40.827063 kubelet[2456]: I0707 01:23:40.826994 2456 kubelet_node_status.go:72] "Attempting to register node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:40.827450 kubelet[2456]: E0707 01:23:40.827415 2456 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.90:6443/api/v1/nodes\": dial tcp 10.244.21.90:6443: connect: connection refused" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:40.947696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844800711.mount: Deactivated successfully. 
Jul 7 01:23:40.958315 containerd[1624]: time="2025-07-07T01:23:40.956931225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:23:40.958964 containerd[1624]: time="2025-07-07T01:23:40.958930441Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:23:40.959598 containerd[1624]: time="2025-07-07T01:23:40.959533894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 01:23:40.960347 containerd[1624]: time="2025-07-07T01:23:40.960311468Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:23:40.961528 containerd[1624]: time="2025-07-07T01:23:40.961482197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 7 01:23:40.962274 containerd[1624]: time="2025-07-07T01:23:40.962241025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:23:40.962463 containerd[1624]: time="2025-07-07T01:23:40.962423742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 01:23:40.968266 containerd[1624]: time="2025-07-07T01:23:40.968224226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:23:40.970350 containerd[1624]: time="2025-07-07T01:23:40.970309034Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.294360466s" Jul 7 01:23:40.971978 containerd[1624]: time="2025-07-07T01:23:40.971941934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.30218603s" Jul 7 01:23:40.974841 containerd[1624]: time="2025-07-07T01:23:40.974800994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.298666397s" Jul 7 01:23:41.176003 kubelet[2456]: E0707 01:23:41.175918 2456 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.244.21.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:41.337723 containerd[1624]: time="2025-07-07T01:23:41.337449362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:23:41.337968 containerd[1624]: time="2025-07-07T01:23:41.337531247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:23:41.337968 containerd[1624]: time="2025-07-07T01:23:41.337555714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:41.337968 containerd[1624]: time="2025-07-07T01:23:41.337720934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:41.356336 containerd[1624]: time="2025-07-07T01:23:41.356071379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:23:41.356336 containerd[1624]: time="2025-07-07T01:23:41.356142536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:23:41.356336 containerd[1624]: time="2025-07-07T01:23:41.356220393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:23:41.356336 containerd[1624]: time="2025-07-07T01:23:41.356239293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:41.356996 containerd[1624]: time="2025-07-07T01:23:41.356478793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:23:41.357201 containerd[1624]: time="2025-07-07T01:23:41.356512880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:41.357733 containerd[1624]: time="2025-07-07T01:23:41.357668011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:41.358711 containerd[1624]: time="2025-07-07T01:23:41.358644527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:41.498477 containerd[1624]: time="2025-07-07T01:23:41.498360109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3dgpq.gb1.brightbox.com,Uid:c99d3427ca64a34a3c7b10a984f113f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"60937cb84e5895b330a1b0aaf038621a059766504259ecc320eb8eb14a8efd70\"" Jul 7 01:23:41.508325 containerd[1624]: time="2025-07-07T01:23:41.507496979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3dgpq.gb1.brightbox.com,Uid:b7a1a5c9d32c6e6809fed85f0348ad23,Namespace:kube-system,Attempt:0,} returns sandbox id \"a645dfb246baf90ceddd70fe5e9ea1485387b33a0695b504503bb5600b3791af\"" Jul 7 01:23:41.509618 containerd[1624]: time="2025-07-07T01:23:41.509582713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3dgpq.gb1.brightbox.com,Uid:6ef75e3ba9aea58162febda624ae01a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6ddfa0300938d5f35878f4a234e7fe8a657b898f4bd05d1441f412a469cc7e\"" Jul 7 01:23:41.512086 containerd[1624]: time="2025-07-07T01:23:41.512049492Z" level=info msg="CreateContainer within sandbox \"60937cb84e5895b330a1b0aaf038621a059766504259ecc320eb8eb14a8efd70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 01:23:41.517848 containerd[1624]: time="2025-07-07T01:23:41.517625559Z" level=info msg="CreateContainer within sandbox \"a645dfb246baf90ceddd70fe5e9ea1485387b33a0695b504503bb5600b3791af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 01:23:41.519813 containerd[1624]: time="2025-07-07T01:23:41.519772641Z" level=info msg="CreateContainer within sandbox \"db6ddfa0300938d5f35878f4a234e7fe8a657b898f4bd05d1441f412a469cc7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 01:23:41.534392 containerd[1624]: time="2025-07-07T01:23:41.534334872Z" level=info msg="CreateContainer within sandbox \"60937cb84e5895b330a1b0aaf038621a059766504259ecc320eb8eb14a8efd70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"977966b83917081f41bd1e7ef55d677e07c2575e77a5ffc373f02610a538aaf0\"" Jul 7 01:23:41.535830 containerd[1624]: time="2025-07-07T01:23:41.535795365Z" level=info msg="StartContainer for \"977966b83917081f41bd1e7ef55d677e07c2575e77a5ffc373f02610a538aaf0\"" Jul 7 01:23:41.538056 containerd[1624]: time="2025-07-07T01:23:41.538015552Z" level=info msg="CreateContainer within sandbox \"a645dfb246baf90ceddd70fe5e9ea1485387b33a0695b504503bb5600b3791af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea758824e2145df26539e4e9b5f048a0900582474ccdbe382b32c4af281a4131\"" Jul 7 01:23:41.540340 containerd[1624]: time="2025-07-07T01:23:41.539044943Z" level=info msg="StartContainer for \"ea758824e2145df26539e4e9b5f048a0900582474ccdbe382b32c4af281a4131\"" Jul 7 01:23:41.547939 containerd[1624]: time="2025-07-07T01:23:41.547872230Z" level=info msg="CreateContainer within sandbox \"db6ddfa0300938d5f35878f4a234e7fe8a657b898f4bd05d1441f412a469cc7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c0645800825c97fb66da1a8e9bd7810c57984e40b34c6471eafdf695ec13ca9\"" Jul 7 01:23:41.548726 containerd[1624]: time="2025-07-07T01:23:41.548694940Z" level=info msg="StartContainer for \"2c0645800825c97fb66da1a8e9bd7810c57984e40b34c6471eafdf695ec13ca9\"" Jul 7 01:23:41.688535 containerd[1624]: time="2025-07-07T01:23:41.688459564Z" level=info 
msg="StartContainer for \"977966b83917081f41bd1e7ef55d677e07c2575e77a5ffc373f02610a538aaf0\" returns successfully" Jul 7 01:23:41.744319 containerd[1624]: time="2025-07-07T01:23:41.741912950Z" level=info msg="StartContainer for \"ea758824e2145df26539e4e9b5f048a0900582474ccdbe382b32c4af281a4131\" returns successfully" Jul 7 01:23:41.744319 containerd[1624]: time="2025-07-07T01:23:41.741913129Z" level=info msg="StartContainer for \"2c0645800825c97fb66da1a8e9bd7810c57984e40b34c6471eafdf695ec13ca9\" returns successfully" Jul 7 01:23:42.011402 kubelet[2456]: W0707 01:23:42.009872 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.21.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3dgpq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.21.90:6443: connect: connection refused Jul 7 01:23:42.014828 kubelet[2456]: E0707 01:23:42.013644 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.21.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3dgpq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.21.90:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:23:42.234496 kubelet[2456]: E0707 01:23:42.234416 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3dgpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.90:6443: connect: connection refused" interval="3.2s" Jul 7 01:23:42.435312 kubelet[2456]: I0707 01:23:42.434878 2456 kubelet_node_status.go:72] "Attempting to register node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:44.465504 kubelet[2456]: I0707 01:23:44.465044 2456 kubelet_node_status.go:75] "Successfully registered node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:44.466842 kubelet[2456]: E0707 01:23:44.466804 2456 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-3dgpq.gb1.brightbox.com\": node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:44.506640 kubelet[2456]: E0707 01:23:44.506583 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:44.607181 kubelet[2456]: E0707 01:23:44.607096 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:44.708379 kubelet[2456]: E0707 01:23:44.708230 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:44.809432 kubelet[2456]: E0707 01:23:44.809057 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:44.910089 kubelet[2456]: E0707 01:23:44.909971 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.015224 kubelet[2456]: E0707 01:23:45.010891 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.112099 kubelet[2456]: E0707 01:23:45.111862 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.213119 kubelet[2456]: E0707 
01:23:45.212957 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.313504 kubelet[2456]: E0707 01:23:45.313400 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.413776 kubelet[2456]: E0707 01:23:45.413587 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.514247 kubelet[2456]: E0707 01:23:45.514170 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.616054 kubelet[2456]: E0707 01:23:45.614788 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.715239 kubelet[2456]: E0707 01:23:45.715060 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.815933 kubelet[2456]: E0707 01:23:45.815854 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:45.916743 kubelet[2456]: E0707 01:23:45.916621 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.018828 kubelet[2456]: E0707 01:23:46.017894 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.118341 kubelet[2456]: E0707 01:23:46.118116 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.218398 kubelet[2456]: E0707 01:23:46.218341 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.319753 kubelet[2456]: E0707 01:23:46.319241 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.420508 kubelet[2456]: E0707 01:23:46.420437 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.521451 kubelet[2456]: E0707 01:23:46.521368 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.623803 kubelet[2456]: E0707 01:23:46.623714 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.724349 kubelet[2456]: E0707 01:23:46.724245 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.824562 kubelet[2456]: E0707 01:23:46.824488 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:46.891457 systemd[1]: Reloading requested from client PID 2735 ('systemctl') (unit session-9.scope)... Jul 7 01:23:46.891505 systemd[1]: Reloading... 
Jul 7 01:23:46.926425 kubelet[2456]: E0707 01:23:46.925634 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:47.009441 zram_generator::config[2774]: No configuration found. Jul 7 01:23:47.026192 kubelet[2456]: E0707 01:23:47.026119 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:47.126422 kubelet[2456]: E0707 01:23:47.126343 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:47.206386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:23:47.226562 kubelet[2456]: E0707 01:23:47.226501 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:47.326641 kubelet[2456]: E0707 01:23:47.326598 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3dgpq.gb1.brightbox.com\" not found" Jul 7 01:23:47.327267 systemd[1]: Reloading finished in 435 ms. Jul 7 01:23:47.377606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:23:47.396055 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 01:23:47.397125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:47.402934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:23:47.677541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:23:47.693970 (kubelet)[2848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 01:23:47.805647 kubelet[2848]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:23:47.807325 kubelet[2848]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 01:23:47.807325 kubelet[2848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:23:47.807325 kubelet[2848]: I0707 01:23:47.806631 2848 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 01:23:47.816348 kubelet[2848]: I0707 01:23:47.815725 2848 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 01:23:47.816348 kubelet[2848]: I0707 01:23:47.815767 2848 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 01:23:47.816348 kubelet[2848]: I0707 01:23:47.816088 2848 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 01:23:47.818565 kubelet[2848]: I0707 01:23:47.818412 2848 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
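After the reload the restarted kubelet reports that client rotation is on and loads its rotated credentials from /var/lib/kubelet/pki/kubelet-client-current.pem, the bundle the earlier failed certificate signing requests were meant to obtain. A small Go sketch, separate from the journal, that inspects that bundle's validity window; only the path comes from the log:

```go
// Illustrative only, not part of the captured journal. Reads the rotated
// client cert/key bundle named in the log and prints the validity window of
// each certificate block it contains.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	const certPath = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	data, err := os.ReadFile(certPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	// The bundle holds both certificate and key material; report only the
	// CERTIFICATE blocks.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse failed:", err)
			continue
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```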
Jul 7 01:23:47.826598 kubelet[2848]: I0707 01:23:47.825051 2848 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 01:23:47.835422 kubelet[2848]: E0707 01:23:47.835371 2848 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 01:23:47.835422 kubelet[2848]: I0707 01:23:47.835420 2848 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 01:23:47.843562 kubelet[2848]: I0707 01:23:47.843529 2848 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 01:23:47.844095 kubelet[2848]: I0707 01:23:47.844045 2848 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 01:23:47.844756 kubelet[2848]: I0707 01:23:47.844249 2848 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 01:23:47.844756 kubelet[2848]: I0707 01:23:47.844313 2848 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3dgpq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 01:23:47.847489 kubelet[2848]: I0707 01:23:47.847441 2848 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 01:23:47.847489 kubelet[2848]: I0707 01:23:47.847482 2848 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 01:23:47.847642 kubelet[2848]: I0707 01:23:47.847568 2848 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:23:47.852649 kubelet[2848]: I0707 01:23:47.852623 2848 kubelet.go:408] "Attempting to sync node with API server" Jul 7 01:23:47.852649 kubelet[2848]: I0707 01:23:47.852659 2848 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 01:23:47.852797 kubelet[2848]: I0707 
01:23:47.852724 2848 kubelet.go:314] "Adding apiserver pod source" Jul 7 01:23:47.852797 kubelet[2848]: I0707 01:23:47.852765 2848 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 01:23:47.867936 kubelet[2848]: I0707 01:23:47.867795 2848 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 01:23:47.871821 kubelet[2848]: I0707 01:23:47.871794 2848 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 01:23:47.878330 kubelet[2848]: I0707 01:23:47.878160 2848 server.go:1274] "Started kubelet" Jul 7 01:23:47.888333 kubelet[2848]: I0707 01:23:47.887058 2848 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 01:23:47.888333 kubelet[2848]: I0707 01:23:47.887780 2848 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 01:23:47.888333 kubelet[2848]: I0707 01:23:47.887931 2848 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 01:23:47.891441 kubelet[2848]: I0707 01:23:47.891143 2848 server.go:449] "Adding debug handlers to kubelet server" Jul 7 01:23:47.892174 kubelet[2848]: I0707 01:23:47.892122 2848 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 01:23:47.900120 kubelet[2848]: I0707 01:23:47.899683 2848 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 01:23:47.906894 kubelet[2848]: I0707 01:23:47.906554 2848 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 01:23:47.909651 kubelet[2848]: I0707 01:23:47.909625 2848 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 01:23:47.910721 kubelet[2848]: I0707 01:23:47.910699 2848 reconciler.go:26] "Reconciler: start to sync state" Jul 7 01:23:47.912525 kubelet[2848]: I0707 01:23:47.912494 2848 factory.go:221] Registration of the systemd container factory successfully Jul 7 01:23:47.912780 kubelet[2848]: I0707 01:23:47.912743 2848 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 01:23:47.924064 kubelet[2848]: I0707 01:23:47.923785 2848 factory.go:221] Registration of the containerd container factory successfully Jul 7 01:23:47.928109 sudo[2863]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 01:23:47.928818 sudo[2863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 01:23:47.937872 kubelet[2848]: I0707 01:23:47.937821 2848 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 01:23:47.958263 kubelet[2848]: I0707 01:23:47.958220 2848 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 01:23:47.958447 kubelet[2848]: I0707 01:23:47.958310 2848 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 01:23:47.958447 kubelet[2848]: I0707 01:23:47.958360 2848 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 01:23:47.958573 kubelet[2848]: E0707 01:23:47.958443 2848 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 01:23:47.964329 kubelet[2848]: E0707 01:23:47.962973 2848 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 01:23:48.060671 kubelet[2848]: E0707 01:23:48.060618 2848 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 01:23:48.092516 kubelet[2848]: I0707 01:23:48.092467 2848 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 01:23:48.092516 kubelet[2848]: I0707 01:23:48.092502 2848 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 01:23:48.092733 kubelet[2848]: I0707 01:23:48.092542 2848 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:23:48.094197 kubelet[2848]: I0707 01:23:48.094141 2848 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 01:23:48.094197 kubelet[2848]: I0707 01:23:48.094175 2848 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 01:23:48.094378 kubelet[2848]: I0707 01:23:48.094216 2848 policy_none.go:49] "None policy: Start" Jul 7 01:23:48.099723 kubelet[2848]: I0707 01:23:48.099507 2848 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 01:23:48.099723 kubelet[2848]: I0707 01:23:48.099604 2848 state_mem.go:35] "Initializing new in-memory state store" Jul 7 01:23:48.100421 kubelet[2848]: I0707 01:23:48.100188 2848 state_mem.go:75] "Updated machine memory state" Jul 7 01:23:48.107450 kubelet[2848]: I0707 01:23:48.107421 2848 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 01:23:48.108078 kubelet[2848]: I0707 01:23:48.107970 2848 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 01:23:48.108078 kubelet[2848]: I0707 01:23:48.108019 2848 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 01:23:48.109692 kubelet[2848]: I0707 01:23:48.109531 2848 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 01:23:48.235834 kubelet[2848]: I0707 01:23:48.235603 2848 kubelet_node_status.go:72] "Attempting to register node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.257567 kubelet[2848]: I0707 01:23:48.255416 2848 kubelet_node_status.go:111] "Node was previously registered" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.257567 kubelet[2848]: I0707 01:23:48.255553 2848 kubelet_node_status.go:75] "Successfully registered node" node="srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.282733 kubelet[2848]: W0707 01:23:48.282540 2848 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:23:48.283464 kubelet[2848]: W0707 01:23:48.282922 2848 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:23:48.284747 kubelet[2848]: W0707 01:23:48.284039 2848 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:23:48.413130 kubelet[2848]: I0707 01:23:48.413054 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.413130 kubelet[2848]: I0707 01:23:48.413130 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef75e3ba9aea58162febda624ae01a4-kubeconfig\") pod \"kube-scheduler-srv-3dgpq.gb1.brightbox.com\" (UID: \"6ef75e3ba9aea58162febda624ae01a4\") " pod="kube-system/kube-scheduler-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.414514 kubelet[2848]: I0707 01:23:48.413165 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7a1a5c9d32c6e6809fed85f0348ad23-ca-certs\") pod \"kube-apiserver-srv-3dgpq.gb1.brightbox.com\" (UID: \"b7a1a5c9d32c6e6809fed85f0348ad23\") " pod="kube-system/kube-apiserver-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.414514 kubelet[2848]: I0707 01:23:48.413195 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7a1a5c9d32c6e6809fed85f0348ad23-k8s-certs\") pod \"kube-apiserver-srv-3dgpq.gb1.brightbox.com\" (UID: \"b7a1a5c9d32c6e6809fed85f0348ad23\") " pod="kube-system/kube-apiserver-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.414514 kubelet[2848]: I0707 01:23:48.413223 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-ca-certs\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.414514 kubelet[2848]: I0707 01:23:48.413250 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-flexvolume-dir\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.414514 kubelet[2848]: I0707 01:23:48.413280 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-k8s-certs\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.414933 kubelet[2848]: I0707 01:23:48.413351 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c99d3427ca64a34a3c7b10a984f113f9-kubeconfig\") pod \"kube-controller-manager-srv-3dgpq.gb1.brightbox.com\" (UID: \"c99d3427ca64a34a3c7b10a984f113f9\") " 
pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.414933 kubelet[2848]: I0707 01:23:48.413437 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7a1a5c9d32c6e6809fed85f0348ad23-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3dgpq.gb1.brightbox.com\" (UID: \"b7a1a5c9d32c6e6809fed85f0348ad23\") " pod="kube-system/kube-apiserver-srv-3dgpq.gb1.brightbox.com" Jul 7 01:23:48.854098 kubelet[2848]: I0707 01:23:48.854038 2848 apiserver.go:52] "Watching apiserver" Jul 7 01:23:48.869084 sudo[2863]: pam_unix(sudo:session): session closed for user root Jul 7 01:23:48.911387 kubelet[2848]: I0707 01:23:48.911294 2848 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 01:23:48.918579 kubelet[2848]: I0707 01:23:48.918499 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-3dgpq.gb1.brightbox.com" podStartSLOduration=0.918471646 podStartE2EDuration="918.471646ms" podCreationTimestamp="2025-07-07 01:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:23:48.917190233 +0000 UTC m=+1.200315927" watchObservedRunningTime="2025-07-07 01:23:48.918471646 +0000 UTC m=+1.201597327" Jul 7 01:23:48.951689 kubelet[2848]: I0707 01:23:48.951400 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-3dgpq.gb1.brightbox.com" podStartSLOduration=0.951376211 podStartE2EDuration="951.376211ms" podCreationTimestamp="2025-07-07 01:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:23:48.934823381 +0000 UTC m=+1.217949071" watchObservedRunningTime="2025-07-07 01:23:48.951376211 +0000 UTC m=+1.234501888" Jul 7 01:23:48.966707 kubelet[2848]: I0707 01:23:48.966215 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-3dgpq.gb1.brightbox.com" podStartSLOduration=0.966192026 podStartE2EDuration="966.192026ms" podCreationTimestamp="2025-07-07 01:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:23:48.949430287 +0000 UTC m=+1.232555971" watchObservedRunningTime="2025-07-07 01:23:48.966192026 +0000 UTC m=+1.249317707" Jul 7 01:23:51.465002 sudo[1893]: pam_unix(sudo:session): session closed for user root Jul 7 01:23:51.629168 sshd[1889]: pam_unix(sshd:session): session closed for user core Jul 7 01:23:51.635994 systemd[1]: sshd@6-10.244.21.90:22-139.178.68.195:51780.service: Deactivated successfully. Jul 7 01:23:51.640732 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 01:23:51.641066 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Jul 7 01:23:51.644358 systemd-logind[1596]: Removed session 9. Jul 7 01:23:52.960549 kubelet[2848]: I0707 01:23:52.960480 2848 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 01:23:52.962414 containerd[1624]: time="2025-07-07T01:23:52.962152541Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 01:23:52.963519 kubelet[2848]: I0707 01:23:52.963478 2848 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 01:23:53.649671 kubelet[2848]: I0707 01:23:53.649607 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sjc\" (UniqueName: \"kubernetes.io/projected/1c4d68a1-122b-4a78-8b0a-aeabfb6db347-kube-api-access-72sjc\") pod \"kube-proxy-tvx2p\" (UID: \"1c4d68a1-122b-4a78-8b0a-aeabfb6db347\") " pod="kube-system/kube-proxy-tvx2p" Jul 7 01:23:53.649671 kubelet[2848]: I0707 01:23:53.649679 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92pm\" (UniqueName: \"kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-kube-api-access-w92pm\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.649671 kubelet[2848]: I0707 01:23:53.649762 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c4d68a1-122b-4a78-8b0a-aeabfb6db347-lib-modules\") pod \"kube-proxy-tvx2p\" (UID: \"1c4d68a1-122b-4a78-8b0a-aeabfb6db347\") " pod="kube-system/kube-proxy-tvx2p" Jul 7 01:23:53.650435 kubelet[2848]: I0707 01:23:53.649799 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cni-path\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.650435 kubelet[2848]: I0707 01:23:53.649827 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-etc-cni-netd\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.650435 kubelet[2848]: I0707 01:23:53.649854 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-lib-modules\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652315 kubelet[2848]: I0707 01:23:53.651940 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-cgroup\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652315 kubelet[2848]: I0707 01:23:53.652021 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-kernel\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652315 kubelet[2848]: I0707 01:23:53.652055 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c4d68a1-122b-4a78-8b0a-aeabfb6db347-xtables-lock\") pod \"kube-proxy-tvx2p\" (UID: \"1c4d68a1-122b-4a78-8b0a-aeabfb6db347\") " pod="kube-system/kube-proxy-tvx2p" Jul 
7 01:23:53.652315 kubelet[2848]: I0707 01:23:53.652116 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-bpf-maps\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652315 kubelet[2848]: I0707 01:23:53.652149 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-hostproc\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652315 kubelet[2848]: I0707 01:23:53.652204 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-xtables-lock\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652909 kubelet[2848]: I0707 01:23:53.652261 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/354092e9-2ae2-4702-aff4-78efbc4772d7-clustermesh-secrets\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652909 kubelet[2848]: I0707 01:23:53.652353 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-config-path\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652909 kubelet[2848]: I0707 01:23:53.652442 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-run\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652909 kubelet[2848]: I0707 01:23:53.652486 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-net\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652909 kubelet[2848]: I0707 01:23:53.652514 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-hubble-tls\") pod \"cilium-8dwsv\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " pod="kube-system/cilium-8dwsv" Jul 7 01:23:53.652909 kubelet[2848]: I0707 01:23:53.652538 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c4d68a1-122b-4a78-8b0a-aeabfb6db347-kube-proxy\") pod \"kube-proxy-tvx2p\" (UID: \"1c4d68a1-122b-4a78-8b0a-aeabfb6db347\") " pod="kube-system/kube-proxy-tvx2p" Jul 7 01:23:53.892854 containerd[1624]: time="2025-07-07T01:23:53.892756180Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-tvx2p,Uid:1c4d68a1-122b-4a78-8b0a-aeabfb6db347,Namespace:kube-system,Attempt:0,}" Jul 7 01:23:53.907604 containerd[1624]: time="2025-07-07T01:23:53.906952970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dwsv,Uid:354092e9-2ae2-4702-aff4-78efbc4772d7,Namespace:kube-system,Attempt:0,}" Jul 7 01:23:54.018930 containerd[1624]: time="2025-07-07T01:23:54.018759249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:23:54.018930 containerd[1624]: time="2025-07-07T01:23:54.018877764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:23:54.021125 containerd[1624]: time="2025-07-07T01:23:54.018899199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:54.024044 containerd[1624]: time="2025-07-07T01:23:54.022792463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:54.026660 containerd[1624]: time="2025-07-07T01:23:54.026026237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:23:54.026660 containerd[1624]: time="2025-07-07T01:23:54.026262683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:23:54.026660 containerd[1624]: time="2025-07-07T01:23:54.026429071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:54.030399 containerd[1624]: time="2025-07-07T01:23:54.027004639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:54.141611 containerd[1624]: time="2025-07-07T01:23:54.141528795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvx2p,Uid:1c4d68a1-122b-4a78-8b0a-aeabfb6db347,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc2687812824c80a3f5d25f6df7ef085cd678949a1e271be55fa8ddeb0b5f39\"" Jul 7 01:23:54.150396 containerd[1624]: time="2025-07-07T01:23:54.150033821Z" level=info msg="CreateContainer within sandbox \"cfc2687812824c80a3f5d25f6df7ef085cd678949a1e271be55fa8ddeb0b5f39\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 01:23:54.151022 containerd[1624]: time="2025-07-07T01:23:54.150970838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dwsv,Uid:354092e9-2ae2-4702-aff4-78efbc4772d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\"" Jul 7 01:23:54.154719 containerd[1624]: time="2025-07-07T01:23:54.154630553Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 01:23:54.156136 kubelet[2848]: I0707 01:23:54.155738 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c84fc14-1648-4d03-9542-b41b6dcff7c6-cilium-config-path\") pod \"cilium-operator-5d85765b45-t46hz\" (UID: \"7c84fc14-1648-4d03-9542-b41b6dcff7c6\") " pod="kube-system/cilium-operator-5d85765b45-t46hz" Jul 7 01:23:54.156136 kubelet[2848]: I0707 01:23:54.156087 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28g4t\" (UniqueName: \"kubernetes.io/projected/7c84fc14-1648-4d03-9542-b41b6dcff7c6-kube-api-access-28g4t\") pod \"cilium-operator-5d85765b45-t46hz\" (UID: \"7c84fc14-1648-4d03-9542-b41b6dcff7c6\") " pod="kube-system/cilium-operator-5d85765b45-t46hz" Jul 7 01:23:54.173216 containerd[1624]: time="2025-07-07T01:23:54.172682735Z" level=info msg="CreateContainer within sandbox \"cfc2687812824c80a3f5d25f6df7ef085cd678949a1e271be55fa8ddeb0b5f39\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68bd5fbd20aaa4a05342f4293f5dc0ec48a05714e6cbf1d916922bb2cc2e2e40\"" Jul 7 01:23:54.175036 containerd[1624]: time="2025-07-07T01:23:54.175000679Z" level=info msg="StartContainer for \"68bd5fbd20aaa4a05342f4293f5dc0ec48a05714e6cbf1d916922bb2cc2e2e40\"" Jul 7 01:23:54.255984 containerd[1624]: time="2025-07-07T01:23:54.255933673Z" level=info msg="StartContainer for \"68bd5fbd20aaa4a05342f4293f5dc0ec48a05714e6cbf1d916922bb2cc2e2e40\" returns successfully" Jul 7 01:23:54.375039 containerd[1624]: time="2025-07-07T01:23:54.374971131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t46hz,Uid:7c84fc14-1648-4d03-9542-b41b6dcff7c6,Namespace:kube-system,Attempt:0,}" Jul 7 01:23:54.415784 containerd[1624]: time="2025-07-07T01:23:54.415253031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:23:54.415784 containerd[1624]: time="2025-07-07T01:23:54.415372428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:23:54.415784 containerd[1624]: time="2025-07-07T01:23:54.415398357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:54.416354 containerd[1624]: time="2025-07-07T01:23:54.415679877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:23:54.513877 containerd[1624]: time="2025-07-07T01:23:54.512970894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t46hz,Uid:7c84fc14-1648-4d03-9542-b41b6dcff7c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\"" Jul 7 01:23:58.956835 kubelet[2848]: I0707 01:23:58.956466 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tvx2p" podStartSLOduration=5.95615331 podStartE2EDuration="5.95615331s" podCreationTimestamp="2025-07-07 01:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:23:55.066726658 +0000 UTC m=+7.349852351" watchObservedRunningTime="2025-07-07 01:23:58.95615331 +0000 UTC m=+11.239278996" Jul 7 01:24:01.204572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31358054.mount: Deactivated successfully. Jul 7 01:24:04.943468 containerd[1624]: time="2025-07-07T01:24:04.943333420Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:24:04.946406 containerd[1624]: time="2025-07-07T01:24:04.944493323Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 01:24:04.946618 containerd[1624]: time="2025-07-07T01:24:04.946567397Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:24:04.949738 containerd[1624]: time="2025-07-07T01:24:04.949697427Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.79478651s" Jul 7 01:24:04.950398 containerd[1624]: time="2025-07-07T01:24:04.949762712Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 01:24:04.952619 containerd[1624]: time="2025-07-07T01:24:04.952569656Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 01:24:04.955601 containerd[1624]: time="2025-07-07T01:24:04.955316585Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 01:24:05.054110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504976551.mount: Deactivated successfully. 
Jul 7 01:24:05.057954 containerd[1624]: time="2025-07-07T01:24:05.057775564Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543\"" Jul 7 01:24:05.059382 containerd[1624]: time="2025-07-07T01:24:05.059339675Z" level=info msg="StartContainer for \"8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543\"" Jul 7 01:24:05.335566 systemd[1]: run-containerd-runc-k8s.io-8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543-runc.iBqRsr.mount: Deactivated successfully. Jul 7 01:24:05.404653 containerd[1624]: time="2025-07-07T01:24:05.404592249Z" level=info msg="StartContainer for \"8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543\" returns successfully" Jul 7 01:24:05.619843 containerd[1624]: time="2025-07-07T01:24:05.610361181Z" level=info msg="shim disconnected" id=8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543 namespace=k8s.io Jul 7 01:24:05.620590 containerd[1624]: time="2025-07-07T01:24:05.620062192Z" level=warning msg="cleaning up after shim disconnected" id=8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543 namespace=k8s.io Jul 7 01:24:05.620590 containerd[1624]: time="2025-07-07T01:24:05.620099550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:24:06.049005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543-rootfs.mount: Deactivated successfully. Jul 7 01:24:06.069833 systemd-resolved[1511]: Under memory pressure, flushing caches. Jul 7 01:24:06.072560 systemd-journald[1178]: Under memory pressure, flushing caches. Jul 7 01:24:06.069920 systemd-resolved[1511]: Flushed all caches. Jul 7 01:24:06.158377 containerd[1624]: time="2025-07-07T01:24:06.157871085Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 01:24:06.224502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3956542471.mount: Deactivated successfully. Jul 7 01:24:06.229136 containerd[1624]: time="2025-07-07T01:24:06.229070182Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156\"" Jul 7 01:24:06.230884 containerd[1624]: time="2025-07-07T01:24:06.230850481Z" level=info msg="StartContainer for \"2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156\"" Jul 7 01:24:06.330484 containerd[1624]: time="2025-07-07T01:24:06.328239339Z" level=info msg="StartContainer for \"2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156\" returns successfully" Jul 7 01:24:06.352701 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 01:24:06.353229 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:24:06.353412 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:24:06.365121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:24:06.406642 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 01:24:06.410505 containerd[1624]: time="2025-07-07T01:24:06.410241407Z" level=info msg="shim disconnected" id=2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156 namespace=k8s.io Jul 7 01:24:06.410505 containerd[1624]: time="2025-07-07T01:24:06.410382754Z" level=warning msg="cleaning up after shim disconnected" id=2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156 namespace=k8s.io Jul 7 01:24:06.410505 containerd[1624]: time="2025-07-07T01:24:06.410405197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:24:07.049512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156-rootfs.mount: Deactivated successfully. Jul 7 01:24:07.171149 containerd[1624]: time="2025-07-07T01:24:07.170724203Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 01:24:07.210801 containerd[1624]: time="2025-07-07T01:24:07.209200740Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f\"" Jul 7 01:24:07.210801 containerd[1624]: time="2025-07-07T01:24:07.210061659Z" level=info msg="StartContainer for \"4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f\"" Jul 7 01:24:07.386213 containerd[1624]: time="2025-07-07T01:24:07.386165291Z" level=info msg="StartContainer for \"4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f\" returns successfully" Jul 7 01:24:07.492528 containerd[1624]: time="2025-07-07T01:24:07.492433355Z" level=info msg="shim disconnected" id=4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f namespace=k8s.io Jul 7 01:24:07.493037 containerd[1624]: time="2025-07-07T01:24:07.492692632Z" level=warning msg="cleaning up after shim disconnected" id=4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f namespace=k8s.io Jul 7 01:24:07.493037 containerd[1624]: time="2025-07-07T01:24:07.492715485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:24:07.853193 containerd[1624]: time="2025-07-07T01:24:07.853123371Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:24:07.856672 containerd[1624]: time="2025-07-07T01:24:07.856383010Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 01:24:07.860337 containerd[1624]: time="2025-07-07T01:24:07.860018006Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:24:07.861933 containerd[1624]: time="2025-07-07T01:24:07.861892539Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 
2.909251404s" Jul 7 01:24:07.862192 containerd[1624]: time="2025-07-07T01:24:07.862053425Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 01:24:07.868368 containerd[1624]: time="2025-07-07T01:24:07.868310082Z" level=info msg="CreateContainer within sandbox \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 01:24:07.910594 containerd[1624]: time="2025-07-07T01:24:07.910391157Z" level=info msg="CreateContainer within sandbox \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\"" Jul 7 01:24:07.912471 containerd[1624]: time="2025-07-07T01:24:07.911370852Z" level=info msg="StartContainer for \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\"" Jul 7 01:24:08.007137 containerd[1624]: time="2025-07-07T01:24:08.007080761Z" level=info msg="StartContainer for \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\" returns successfully" Jul 7 01:24:08.049493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f-rootfs.mount: Deactivated successfully. Jul 7 01:24:08.191424 containerd[1624]: time="2025-07-07T01:24:08.190742600Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 01:24:08.199321 kubelet[2848]: I0707 01:24:08.197905 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-t46hz" podStartSLOduration=1.849193745 podStartE2EDuration="15.197827648s" podCreationTimestamp="2025-07-07 01:23:53 +0000 UTC" firstStartedPulling="2025-07-07 01:23:54.514829096 +0000 UTC m=+6.797954768" lastFinishedPulling="2025-07-07 01:24:07.863462981 +0000 UTC m=+20.146588671" observedRunningTime="2025-07-07 01:24:08.195523872 +0000 UTC m=+20.478649572" watchObservedRunningTime="2025-07-07 01:24:08.197827648 +0000 UTC m=+20.480953332" Jul 7 01:24:08.274069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729388290.mount: Deactivated successfully. 
Jul 7 01:24:08.290200 containerd[1624]: time="2025-07-07T01:24:08.290061845Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc\"" Jul 7 01:24:08.304648 containerd[1624]: time="2025-07-07T01:24:08.304592173Z" level=info msg="StartContainer for \"fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc\"" Jul 7 01:24:08.535036 containerd[1624]: time="2025-07-07T01:24:08.534792670Z" level=info msg="StartContainer for \"fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc\" returns successfully" Jul 7 01:24:08.684343 containerd[1624]: time="2025-07-07T01:24:08.682769007Z" level=info msg="shim disconnected" id=fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc namespace=k8s.io Jul 7 01:24:08.684343 containerd[1624]: time="2025-07-07T01:24:08.682847249Z" level=warning msg="cleaning up after shim disconnected" id=fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc namespace=k8s.io Jul 7 01:24:08.684343 containerd[1624]: time="2025-07-07T01:24:08.682873224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:24:09.052985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc-rootfs.mount: Deactivated successfully. Jul 7 01:24:09.202048 containerd[1624]: time="2025-07-07T01:24:09.201992398Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 01:24:09.247336 containerd[1624]: time="2025-07-07T01:24:09.245569431Z" level=info msg="CreateContainer within sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\"" Jul 7 01:24:09.247548 containerd[1624]: time="2025-07-07T01:24:09.247443090Z" level=info msg="StartContainer for \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\"" Jul 7 01:24:09.418097 containerd[1624]: time="2025-07-07T01:24:09.417941650Z" level=info msg="StartContainer for \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\" returns successfully" Jul 7 01:24:09.721197 kubelet[2848]: I0707 01:24:09.721038 2848 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 01:24:09.897593 kubelet[2848]: I0707 01:24:09.897531 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cd0ff99-1946-4440-9c16-4b5437b3a197-config-volume\") pod \"coredns-7c65d6cfc9-zv6rn\" (UID: \"1cd0ff99-1946-4440-9c16-4b5437b3a197\") " pod="kube-system/coredns-7c65d6cfc9-zv6rn" Jul 7 01:24:09.897593 kubelet[2848]: I0707 01:24:09.897597 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxdz\" (UniqueName: \"kubernetes.io/projected/2dd003a7-03d0-4742-a227-56812239a7b5-kube-api-access-hsxdz\") pod \"coredns-7c65d6cfc9-wf72m\" (UID: \"2dd003a7-03d0-4742-a227-56812239a7b5\") " pod="kube-system/coredns-7c65d6cfc9-wf72m" Jul 7 01:24:09.897872 kubelet[2848]: I0707 01:24:09.897633 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dd003a7-03d0-4742-a227-56812239a7b5-config-volume\") pod \"coredns-7c65d6cfc9-wf72m\" (UID: \"2dd003a7-03d0-4742-a227-56812239a7b5\") " pod="kube-system/coredns-7c65d6cfc9-wf72m" Jul 7 01:24:09.897872 kubelet[2848]: I0707 01:24:09.897701 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwlm\" (UniqueName: \"kubernetes.io/projected/1cd0ff99-1946-4440-9c16-4b5437b3a197-kube-api-access-gpwlm\") pod \"coredns-7c65d6cfc9-zv6rn\" (UID: \"1cd0ff99-1946-4440-9c16-4b5437b3a197\") " pod="kube-system/coredns-7c65d6cfc9-zv6rn" Jul 7 01:24:10.119128 containerd[1624]: time="2025-07-07T01:24:10.119004855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wf72m,Uid:2dd003a7-03d0-4742-a227-56812239a7b5,Namespace:kube-system,Attempt:0,}" Jul 7 01:24:10.127544 containerd[1624]: time="2025-07-07T01:24:10.127079508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zv6rn,Uid:1cd0ff99-1946-4440-9c16-4b5437b3a197,Namespace:kube-system,Attempt:0,}" Jul 7 01:24:10.264484 kubelet[2848]: I0707 01:24:10.262759 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8dwsv" podStartSLOduration=6.464142329 podStartE2EDuration="17.262727163s" podCreationTimestamp="2025-07-07 01:23:53 +0000 UTC" firstStartedPulling="2025-07-07 01:23:54.152837314 +0000 UTC m=+6.435962986" lastFinishedPulling="2025-07-07 01:24:04.951422129 +0000 UTC m=+17.234547820" observedRunningTime="2025-07-07 01:24:10.257886959 +0000 UTC m=+22.541012650" watchObservedRunningTime="2025-07-07 01:24:10.262727163 +0000 UTC m=+22.545852850" Jul 7 01:24:12.227758 systemd-networkd[1256]: cilium_host: Link UP Jul 7 01:24:12.228060 systemd-networkd[1256]: cilium_net: Link UP Jul 7 01:24:12.228067 systemd-networkd[1256]: cilium_net: Gained carrier Jul 7 01:24:12.228469 systemd-networkd[1256]: cilium_host: Gained carrier Jul 7 01:24:12.402975 systemd-networkd[1256]: cilium_vxlan: Link UP Jul 7 01:24:12.403120 systemd-networkd[1256]: cilium_vxlan: Gained carrier Jul 7 01:24:12.853757 systemd-networkd[1256]: cilium_host: Gained IPv6LL Jul 7 01:24:12.991546 kernel: NET: Registered PF_ALG protocol family Jul 7 01:24:13.045543 systemd-networkd[1256]: cilium_net: Gained IPv6LL Jul 7 01:24:14.116062 systemd-networkd[1256]: lxc_health: Link UP Jul 7 01:24:14.127248 systemd-networkd[1256]: lxc_health: Gained carrier Jul 7 01:24:14.360189 systemd-networkd[1256]: lxcdeb276a5b433: Link UP Jul 7 01:24:14.367564 kernel: eth0: renamed from tmp02def Jul 7 01:24:14.374865 systemd-networkd[1256]: lxcdeb276a5b433: Gained carrier Jul 7 01:24:14.392416 systemd-networkd[1256]: cilium_vxlan: Gained IPv6LL Jul 7 01:24:14.508426 kernel: eth0: renamed from tmp20315 Jul 7 01:24:14.519737 systemd-networkd[1256]: lxc3381d6de756e: Link UP Jul 7 01:24:14.520311 systemd-networkd[1256]: lxc3381d6de756e: Gained carrier Jul 7 01:24:15.349606 systemd-networkd[1256]: lxc_health: Gained IPv6LL Jul 7 01:24:15.733843 systemd-networkd[1256]: lxc3381d6de756e: Gained IPv6LL Jul 7 01:24:15.736552 systemd-networkd[1256]: lxcdeb276a5b433: Gained IPv6LL Jul 7 01:24:20.332117 containerd[1624]: time="2025-07-07T01:24:20.330981886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:24:20.336208 containerd[1624]: time="2025-07-07T01:24:20.332771386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:24:20.336208 containerd[1624]: time="2025-07-07T01:24:20.332842870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:24:20.336208 containerd[1624]: time="2025-07-07T01:24:20.332865606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:24:20.336208 containerd[1624]: time="2025-07-07T01:24:20.333135570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:24:20.336208 containerd[1624]: time="2025-07-07T01:24:20.332058205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:24:20.336208 containerd[1624]: time="2025-07-07T01:24:20.332134671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:24:20.336208 containerd[1624]: time="2025-07-07T01:24:20.332354752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:24:20.535251 containerd[1624]: time="2025-07-07T01:24:20.535077137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wf72m,Uid:2dd003a7-03d0-4742-a227-56812239a7b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"20315f99ef4ba57a85768a3977c6d014b49f5e64f552d4b881eb493cc0c4836a\"" Jul 7 01:24:20.551209 containerd[1624]: time="2025-07-07T01:24:20.550963329Z" level=info msg="CreateContainer within sandbox \"20315f99ef4ba57a85768a3977c6d014b49f5e64f552d4b881eb493cc0c4836a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:24:20.554171 containerd[1624]: time="2025-07-07T01:24:20.553802082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zv6rn,Uid:1cd0ff99-1946-4440-9c16-4b5437b3a197,Namespace:kube-system,Attempt:0,} returns sandbox id \"02def28bc5da2d830b38f5abf9397e36c841de63bda862b8e98523e5f17132ea\"" Jul 7 01:24:20.558347 containerd[1624]: time="2025-07-07T01:24:20.558183028Z" level=info msg="CreateContainer within sandbox \"02def28bc5da2d830b38f5abf9397e36c841de63bda862b8e98523e5f17132ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:24:20.597145 containerd[1624]: time="2025-07-07T01:24:20.595672997Z" level=info msg="CreateContainer within sandbox \"02def28bc5da2d830b38f5abf9397e36c841de63bda862b8e98523e5f17132ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73bd73ef320f30b048301a2f82939f4f2841873c2bc6b171f1ec6c0a2fc789da\"" Jul 7 01:24:20.599205 containerd[1624]: time="2025-07-07T01:24:20.599160636Z" level=info msg="StartContainer for \"73bd73ef320f30b048301a2f82939f4f2841873c2bc6b171f1ec6c0a2fc789da\"" Jul 7 01:24:20.599774 containerd[1624]: time="2025-07-07T01:24:20.599506464Z" level=info msg="CreateContainer within sandbox \"20315f99ef4ba57a85768a3977c6d014b49f5e64f552d4b881eb493cc0c4836a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be28c5ae50bdf011db52111fe73378efeea87781f2ab44f2ada28832f67c0437\"" Jul 7 01:24:20.600408 containerd[1624]: 
time="2025-07-07T01:24:20.600376513Z" level=info msg="StartContainer for \"be28c5ae50bdf011db52111fe73378efeea87781f2ab44f2ada28832f67c0437\"" Jul 7 01:24:20.716851 containerd[1624]: time="2025-07-07T01:24:20.716793933Z" level=info msg="StartContainer for \"73bd73ef320f30b048301a2f82939f4f2841873c2bc6b171f1ec6c0a2fc789da\" returns successfully" Jul 7 01:24:20.722863 containerd[1624]: time="2025-07-07T01:24:20.722809109Z" level=info msg="StartContainer for \"be28c5ae50bdf011db52111fe73378efeea87781f2ab44f2ada28832f67c0437\" returns successfully" Jul 7 01:24:21.291872 kubelet[2848]: I0707 01:24:21.291735 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zv6rn" podStartSLOduration=28.291657695 podStartE2EDuration="28.291657695s" podCreationTimestamp="2025-07-07 01:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:24:21.290264126 +0000 UTC m=+33.573389829" watchObservedRunningTime="2025-07-07 01:24:21.291657695 +0000 UTC m=+33.574783382" Jul 7 01:24:21.314896 kubelet[2848]: I0707 01:24:21.314812 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wf72m" podStartSLOduration=28.314789571 podStartE2EDuration="28.314789571s" podCreationTimestamp="2025-07-07 01:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:24:21.313625525 +0000 UTC m=+33.596751216" watchObservedRunningTime="2025-07-07 01:24:21.314789571 +0000 UTC m=+33.597915259" Jul 7 01:24:21.352423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809913726.mount: Deactivated successfully. Jul 7 01:24:44.937821 systemd[1]: Started sshd@7-10.244.21.90:22-212.30.33.16:65377.service - OpenSSH per-connection server daemon (212.30.33.16:65377). Jul 7 01:24:45.064882 sshd[4226]: Connection closed by 212.30.33.16 port 65377 Jul 7 01:24:45.064666 systemd[1]: sshd@7-10.244.21.90:22-212.30.33.16:65377.service: Deactivated successfully. Jul 7 01:25:12.591704 systemd[1]: Started sshd@8-10.244.21.90:22-139.178.68.195:38118.service - OpenSSH per-connection server daemon (139.178.68.195:38118). Jul 7 01:25:13.598600 sshd[4234]: Accepted publickey for core from 139.178.68.195 port 38118 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:13.602751 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:13.630506 systemd-logind[1596]: New session 10 of user core. Jul 7 01:25:13.639135 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 01:25:14.894931 sshd[4234]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:14.904591 systemd[1]: sshd@8-10.244.21.90:22-139.178.68.195:38118.service: Deactivated successfully. Jul 7 01:25:14.905228 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Jul 7 01:25:14.910569 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 01:25:14.912030 systemd-logind[1596]: Removed session 10. Jul 7 01:25:20.057996 systemd[1]: Started sshd@9-10.244.21.90:22-139.178.68.195:59814.service - OpenSSH per-connection server daemon (139.178.68.195:59814). 
Jul 7 01:25:21.035386 sshd[4249]: Accepted publickey for core from 139.178.68.195 port 59814 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:21.037630 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:21.045385 systemd-logind[1596]: New session 11 of user core. Jul 7 01:25:21.050788 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 01:25:21.840723 sshd[4249]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:21.847081 systemd[1]: sshd@9-10.244.21.90:22-139.178.68.195:59814.service: Deactivated successfully. Jul 7 01:25:21.852131 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 01:25:21.854165 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Jul 7 01:25:21.856831 systemd-logind[1596]: Removed session 11. Jul 7 01:25:27.001984 systemd[1]: Started sshd@10-10.244.21.90:22-139.178.68.195:59818.service - OpenSSH per-connection server daemon (139.178.68.195:59818). Jul 7 01:25:27.974769 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 59818 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:27.978053 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:27.998920 systemd-logind[1596]: New session 12 of user core. Jul 7 01:25:28.006861 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 01:25:28.743567 sshd[4266]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:28.750038 systemd[1]: sshd@10-10.244.21.90:22-139.178.68.195:59818.service: Deactivated successfully. Jul 7 01:25:28.750570 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. Jul 7 01:25:28.755187 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 01:25:28.756912 systemd-logind[1596]: Removed session 12. Jul 7 01:25:33.917700 systemd[1]: Started sshd@11-10.244.21.90:22-139.178.68.195:56722.service - OpenSSH per-connection server daemon (139.178.68.195:56722). Jul 7 01:25:34.936850 sshd[4281]: Accepted publickey for core from 139.178.68.195 port 56722 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:34.939328 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:34.948555 systemd-logind[1596]: New session 13 of user core. Jul 7 01:25:34.954883 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 01:25:35.733687 sshd[4281]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:35.737849 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. Jul 7 01:25:35.739423 systemd[1]: sshd@11-10.244.21.90:22-139.178.68.195:56722.service: Deactivated successfully. Jul 7 01:25:35.745420 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 01:25:35.747704 systemd-logind[1596]: Removed session 13. Jul 7 01:25:35.911741 systemd[1]: Started sshd@12-10.244.21.90:22-139.178.68.195:56724.service - OpenSSH per-connection server daemon (139.178.68.195:56724). Jul 7 01:25:36.920887 sshd[4295]: Accepted publickey for core from 139.178.68.195 port 56724 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:36.920173 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:36.926931 systemd-logind[1596]: New session 14 of user core. Jul 7 01:25:36.933766 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 7 01:25:37.779424 sshd[4295]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:37.784375 systemd[1]: sshd@12-10.244.21.90:22-139.178.68.195:56724.service: Deactivated successfully. Jul 7 01:25:37.791379 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. Jul 7 01:25:37.793416 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 01:25:37.795037 systemd-logind[1596]: Removed session 14. Jul 7 01:25:37.936746 systemd[1]: Started sshd@13-10.244.21.90:22-139.178.68.195:56738.service - OpenSSH per-connection server daemon (139.178.68.195:56738). Jul 7 01:25:38.918842 sshd[4306]: Accepted publickey for core from 139.178.68.195 port 56738 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:38.921356 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:38.929204 systemd-logind[1596]: New session 15 of user core. Jul 7 01:25:38.935749 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 01:25:39.716138 sshd[4306]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:39.720381 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Jul 7 01:25:39.721384 systemd[1]: sshd@13-10.244.21.90:22-139.178.68.195:56738.service: Deactivated successfully. Jul 7 01:25:39.727980 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 01:25:39.729645 systemd-logind[1596]: Removed session 15. Jul 7 01:25:44.898155 systemd[1]: Started sshd@14-10.244.21.90:22-139.178.68.195:33516.service - OpenSSH per-connection server daemon (139.178.68.195:33516). Jul 7 01:25:45.880353 sshd[4321]: Accepted publickey for core from 139.178.68.195 port 33516 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:45.882965 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:45.893759 systemd-logind[1596]: New session 16 of user core. Jul 7 01:25:45.899734 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 01:25:46.692680 sshd[4321]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:46.699220 systemd[1]: sshd@14-10.244.21.90:22-139.178.68.195:33516.service: Deactivated successfully. Jul 7 01:25:46.700481 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. Jul 7 01:25:46.704991 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 01:25:46.706169 systemd-logind[1596]: Removed session 16. Jul 7 01:25:51.858226 systemd[1]: Started sshd@15-10.244.21.90:22-139.178.68.195:42260.service - OpenSSH per-connection server daemon (139.178.68.195:42260). Jul 7 01:25:52.842813 sshd[4337]: Accepted publickey for core from 139.178.68.195 port 42260 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:52.845018 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:52.852441 systemd-logind[1596]: New session 17 of user core. Jul 7 01:25:52.861920 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 01:25:53.626309 sshd[4337]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:53.632356 systemd[1]: sshd@15-10.244.21.90:22-139.178.68.195:42260.service: Deactivated successfully. Jul 7 01:25:53.632901 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. Jul 7 01:25:53.638007 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 01:25:53.641525 systemd-logind[1596]: Removed session 17. 
Jul 7 01:25:53.796700 systemd[1]: Started sshd@16-10.244.21.90:22-139.178.68.195:42266.service - OpenSSH per-connection server daemon (139.178.68.195:42266). Jul 7 01:25:54.775888 sshd[4350]: Accepted publickey for core from 139.178.68.195 port 42266 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:54.778094 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:54.785364 systemd-logind[1596]: New session 18 of user core. Jul 7 01:25:54.797028 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 01:25:55.870551 sshd[4350]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:55.875195 systemd[1]: sshd@16-10.244.21.90:22-139.178.68.195:42266.service: Deactivated successfully. Jul 7 01:25:55.875798 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Jul 7 01:25:55.880931 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 01:25:55.883325 systemd-logind[1596]: Removed session 18. Jul 7 01:25:56.028667 systemd[1]: Started sshd@17-10.244.21.90:22-139.178.68.195:42280.service - OpenSSH per-connection server daemon (139.178.68.195:42280). Jul 7 01:25:57.021327 sshd[4364]: Accepted publickey for core from 139.178.68.195 port 42280 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:25:57.026341 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:57.033953 systemd-logind[1596]: New session 19 of user core. Jul 7 01:25:57.037714 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 01:26:00.010278 sshd[4364]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:00.017349 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. Jul 7 01:26:00.021060 systemd[1]: sshd@17-10.244.21.90:22-139.178.68.195:42280.service: Deactivated successfully. Jul 7 01:26:00.027489 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 01:26:00.029515 systemd-logind[1596]: Removed session 19. Jul 7 01:26:00.181687 systemd[1]: Started sshd@18-10.244.21.90:22-139.178.68.195:57004.service - OpenSSH per-connection server daemon (139.178.68.195:57004). Jul 7 01:26:01.199641 sshd[4384]: Accepted publickey for core from 139.178.68.195 port 57004 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:01.201952 sshd[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:01.210763 systemd-logind[1596]: New session 20 of user core. Jul 7 01:26:01.218021 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 01:26:02.215685 sshd[4384]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:02.229892 systemd[1]: sshd@18-10.244.21.90:22-139.178.68.195:57004.service: Deactivated successfully. Jul 7 01:26:02.234969 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 01:26:02.237808 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. Jul 7 01:26:02.239621 systemd-logind[1596]: Removed session 20. Jul 7 01:26:02.385844 systemd[1]: Started sshd@19-10.244.21.90:22-139.178.68.195:57018.service - OpenSSH per-connection server daemon (139.178.68.195:57018). 
Jul 7 01:26:03.420407 sshd[4396]: Accepted publickey for core from 139.178.68.195 port 57018 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:03.422318 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:03.431954 systemd-logind[1596]: New session 21 of user core. Jul 7 01:26:03.434787 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 01:26:04.198861 sshd[4396]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:04.204385 systemd[1]: sshd@19-10.244.21.90:22-139.178.68.195:57018.service: Deactivated successfully. Jul 7 01:26:04.208913 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 01:26:04.208988 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. Jul 7 01:26:04.212534 systemd-logind[1596]: Removed session 21. Jul 7 01:26:09.367970 systemd[1]: Started sshd@20-10.244.21.90:22-139.178.68.195:34230.service - OpenSSH per-connection server daemon (139.178.68.195:34230). Jul 7 01:26:10.342025 sshd[4413]: Accepted publickey for core from 139.178.68.195 port 34230 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:10.344390 sshd[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:10.351141 systemd-logind[1596]: New session 22 of user core. Jul 7 01:26:10.358852 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 01:26:11.111589 sshd[4413]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:11.116280 systemd[1]: sshd@20-10.244.21.90:22-139.178.68.195:34230.service: Deactivated successfully. Jul 7 01:26:11.120859 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. Jul 7 01:26:11.121742 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 01:26:11.124251 systemd-logind[1596]: Removed session 22. Jul 7 01:26:16.283688 systemd[1]: Started sshd@21-10.244.21.90:22-139.178.68.195:34244.service - OpenSSH per-connection server daemon (139.178.68.195:34244). Jul 7 01:26:17.315366 sshd[4427]: Accepted publickey for core from 139.178.68.195 port 34244 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:17.318207 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:17.330457 systemd-logind[1596]: New session 23 of user core. Jul 7 01:26:17.334880 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 01:26:18.107396 sshd[4427]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:18.112722 systemd[1]: sshd@21-10.244.21.90:22-139.178.68.195:34244.service: Deactivated successfully. Jul 7 01:26:18.118484 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 01:26:18.119934 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit. Jul 7 01:26:18.122250 systemd-logind[1596]: Removed session 23. Jul 7 01:26:23.277735 systemd[1]: Started sshd@22-10.244.21.90:22-139.178.68.195:57942.service - OpenSSH per-connection server daemon (139.178.68.195:57942). Jul 7 01:26:24.246200 sshd[4441]: Accepted publickey for core from 139.178.68.195 port 57942 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:24.248620 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:24.256476 systemd-logind[1596]: New session 24 of user core. Jul 7 01:26:24.263146 systemd[1]: Started session-24.scope - Session 24 of User core. 
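[Editor's note] The sshd/systemd-logind entries from session 10 onward all repeat the same cycle: a per-connection sshd@N service starts, the public key is accepted, pam_unix opens a session, and later logind removes it and the service deactivates. A hedged Go sketch follows that scans a journal dump like this one and pairs "New session N" with "Removed session N" to flag any session left open; the input filename and the regular expressions are assumptions for illustration, only the standard library is used.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	opened = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	closed = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	// Assumed input: a journal dump such as this document, with one or
	// more entries per physical line.
	f, err := os.Open("journal.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	open := map[string]string{} // session id -> user
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // lines here are very long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range opened.FindAllStringSubmatch(line, -1) {
			open[m[1]] = m[2]
		}
		for _, m := range closed.FindAllStringSubmatch(line, -1) {
			delete(open, m[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	for id, user := range open {
		fmt.Printf("session %s of user %s was opened but never removed\n", id, user)
	}
}
```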
Jul 7 01:26:25.022818 sshd[4441]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:25.030882 systemd[1]: sshd@22-10.244.21.90:22-139.178.68.195:57942.service: Deactivated successfully. Jul 7 01:26:25.036082 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit. Jul 7 01:26:25.036583 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 01:26:25.039385 systemd-logind[1596]: Removed session 24. Jul 7 01:26:25.194777 systemd[1]: Started sshd@23-10.244.21.90:22-139.178.68.195:57944.service - OpenSSH per-connection server daemon (139.178.68.195:57944). Jul 7 01:26:26.165727 sshd[4457]: Accepted publickey for core from 139.178.68.195 port 57944 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:26.168067 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:26.175810 systemd-logind[1596]: New session 25 of user core. Jul 7 01:26:26.185002 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 01:26:28.318930 containerd[1624]: time="2025-07-07T01:26:28.318741309Z" level=info msg="StopContainer for \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\" with timeout 30 (s)" Jul 7 01:26:28.324717 containerd[1624]: time="2025-07-07T01:26:28.324558691Z" level=info msg="Stop container \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\" with signal terminated" Jul 7 01:26:28.433619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40-rootfs.mount: Deactivated successfully. Jul 7 01:26:28.440353 containerd[1624]: time="2025-07-07T01:26:28.439629149Z" level=info msg="shim disconnected" id=42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40 namespace=k8s.io Jul 7 01:26:28.440353 containerd[1624]: time="2025-07-07T01:26:28.439786369Z" level=warning msg="cleaning up after shim disconnected" id=42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40 namespace=k8s.io Jul 7 01:26:28.440353 containerd[1624]: time="2025-07-07T01:26:28.439814906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:26:28.443723 containerd[1624]: time="2025-07-07T01:26:28.443659770Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 01:26:28.449078 containerd[1624]: time="2025-07-07T01:26:28.449018258Z" level=info msg="StopContainer for \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\" with timeout 2 (s)" Jul 7 01:26:28.449849 containerd[1624]: time="2025-07-07T01:26:28.449811024Z" level=info msg="Stop container \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\" with signal terminated" Jul 7 01:26:28.463670 systemd-networkd[1256]: lxc_health: Link DOWN Jul 7 01:26:28.463683 systemd-networkd[1256]: lxc_health: Lost carrier Jul 7 01:26:28.508414 containerd[1624]: time="2025-07-07T01:26:28.508184659Z" level=info msg="StopContainer for \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\" returns successfully" Jul 7 01:26:28.510052 containerd[1624]: time="2025-07-07T01:26:28.509478725Z" level=info msg="StopPodSandbox for \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\"" Jul 7 01:26:28.515032 containerd[1624]: time="2025-07-07T01:26:28.514976558Z" level=info msg="Container to stop 
\"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:26:28.522326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819-shm.mount: Deactivated successfully. Jul 7 01:26:28.540057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633-rootfs.mount: Deactivated successfully. Jul 7 01:26:28.549109 containerd[1624]: time="2025-07-07T01:26:28.548755883Z" level=info msg="shim disconnected" id=fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633 namespace=k8s.io Jul 7 01:26:28.549109 containerd[1624]: time="2025-07-07T01:26:28.548832702Z" level=warning msg="cleaning up after shim disconnected" id=fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633 namespace=k8s.io Jul 7 01:26:28.549109 containerd[1624]: time="2025-07-07T01:26:28.548851674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:26:28.581840 containerd[1624]: time="2025-07-07T01:26:28.581645100Z" level=info msg="shim disconnected" id=1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819 namespace=k8s.io Jul 7 01:26:28.581840 containerd[1624]: time="2025-07-07T01:26:28.581747856Z" level=warning msg="cleaning up after shim disconnected" id=1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819 namespace=k8s.io Jul 7 01:26:28.581840 containerd[1624]: time="2025-07-07T01:26:28.581793308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:26:28.595072 containerd[1624]: time="2025-07-07T01:26:28.593631804Z" level=info msg="StopContainer for \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\" returns successfully" Jul 7 01:26:28.596532 containerd[1624]: time="2025-07-07T01:26:28.596501597Z" level=info msg="StopPodSandbox for \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\"" Jul 7 01:26:28.597057 containerd[1624]: time="2025-07-07T01:26:28.596826620Z" level=info msg="Container to stop \"8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:26:28.597057 containerd[1624]: time="2025-07-07T01:26:28.596856766Z" level=info msg="Container to stop \"2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:26:28.597057 containerd[1624]: time="2025-07-07T01:26:28.596886237Z" level=info msg="Container to stop \"fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:26:28.597057 containerd[1624]: time="2025-07-07T01:26:28.596907324Z" level=info msg="Container to stop \"4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:26:28.597057 containerd[1624]: time="2025-07-07T01:26:28.596936730Z" level=info msg="Container to stop \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:26:28.606986 containerd[1624]: time="2025-07-07T01:26:28.606886810Z" level=warning msg="cleanup warnings time=\"2025-07-07T01:26:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" 
runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 01:26:28.610317 containerd[1624]: time="2025-07-07T01:26:28.610153534Z" level=info msg="TearDown network for sandbox \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" successfully" Jul 7 01:26:28.610317 containerd[1624]: time="2025-07-07T01:26:28.610191470Z" level=info msg="StopPodSandbox for \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" returns successfully" Jul 7 01:26:28.670953 containerd[1624]: time="2025-07-07T01:26:28.670827418Z" level=info msg="shim disconnected" id=bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6 namespace=k8s.io Jul 7 01:26:28.670953 containerd[1624]: time="2025-07-07T01:26:28.670949634Z" level=warning msg="cleaning up after shim disconnected" id=bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6 namespace=k8s.io Jul 7 01:26:28.671243 containerd[1624]: time="2025-07-07T01:26:28.670967258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:26:28.699429 kubelet[2848]: I0707 01:26:28.699373 2848 scope.go:117] "RemoveContainer" containerID="42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40" Jul 7 01:26:28.700704 containerd[1624]: time="2025-07-07T01:26:28.700581089Z" level=info msg="TearDown network for sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" successfully" Jul 7 01:26:28.700704 containerd[1624]: time="2025-07-07T01:26:28.700622416Z" level=info msg="StopPodSandbox for \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" returns successfully" Jul 7 01:26:28.718603 containerd[1624]: time="2025-07-07T01:26:28.718527241Z" level=info msg="RemoveContainer for \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\"" Jul 7 01:26:28.725395 containerd[1624]: time="2025-07-07T01:26:28.725078465Z" level=info msg="RemoveContainer for \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\" returns successfully" Jul 7 01:26:28.726007 kubelet[2848]: I0707 01:26:28.725977 2848 scope.go:117] "RemoveContainer" containerID="42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40" Jul 7 01:26:28.740823 containerd[1624]: time="2025-07-07T01:26:28.730358684Z" level=error msg="ContainerStatus for \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\": not found" Jul 7 01:26:28.756929 kubelet[2848]: E0707 01:26:28.756164 2848 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\": not found" containerID="42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40" Jul 7 01:26:28.756929 kubelet[2848]: I0707 01:26:28.756424 2848 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40"} err="failed to get container status \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\": rpc error: code = NotFound desc = an error occurred when try to find container \"42c92c2d6fa12783cd99fcd9363ac3a664cabb1637d781a4b401a957deb70c40\": not found" Jul 7 01:26:28.832724 kubelet[2848]: I0707 01:26:28.832093 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/7c84fc14-1648-4d03-9542-b41b6dcff7c6-cilium-config-path\") pod \"7c84fc14-1648-4d03-9542-b41b6dcff7c6\" (UID: \"7c84fc14-1648-4d03-9542-b41b6dcff7c6\") " Jul 7 01:26:28.832724 kubelet[2848]: I0707 01:26:28.832171 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-cgroup\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.832724 kubelet[2848]: I0707 01:26:28.832204 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-hostproc\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.832724 kubelet[2848]: I0707 01:26:28.832230 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-run\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.832724 kubelet[2848]: I0707 01:26:28.832257 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-kernel\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.832724 kubelet[2848]: I0707 01:26:28.832282 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cni-path\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.833185 kubelet[2848]: I0707 01:26:28.832337 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-etc-cni-netd\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.833185 kubelet[2848]: I0707 01:26:28.832362 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-lib-modules\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.833185 kubelet[2848]: I0707 01:26:28.832395 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/354092e9-2ae2-4702-aff4-78efbc4772d7-clustermesh-secrets\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.833185 kubelet[2848]: I0707 01:26:28.832436 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28g4t\" (UniqueName: \"kubernetes.io/projected/7c84fc14-1648-4d03-9542-b41b6dcff7c6-kube-api-access-28g4t\") pod \"7c84fc14-1648-4d03-9542-b41b6dcff7c6\" (UID: \"7c84fc14-1648-4d03-9542-b41b6dcff7c6\") " Jul 7 01:26:28.833185 kubelet[2848]: I0707 01:26:28.832466 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-hubble-tls\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.853076 kubelet[2848]: I0707 01:26:28.851341 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 01:26:28.853076 kubelet[2848]: I0707 01:26:28.851229 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.853076 kubelet[2848]: I0707 01:26:28.852542 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c84fc14-1648-4d03-9542-b41b6dcff7c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c84fc14-1648-4d03-9542-b41b6dcff7c6" (UID: "7c84fc14-1648-4d03-9542-b41b6dcff7c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 01:26:28.853076 kubelet[2848]: I0707 01:26:28.852608 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-hostproc" (OuterVolumeSpecName: "hostproc") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.853076 kubelet[2848]: I0707 01:26:28.852621 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.857157 kubelet[2848]: I0707 01:26:28.852646 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.857157 kubelet[2848]: I0707 01:26:28.852658 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.857157 kubelet[2848]: I0707 01:26:28.852688 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.857157 kubelet[2848]: I0707 01:26:28.852720 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cni-path" (OuterVolumeSpecName: "cni-path") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.859322 kubelet[2848]: I0707 01:26:28.859195 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354092e9-2ae2-4702-aff4-78efbc4772d7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 01:26:28.859447 kubelet[2848]: I0707 01:26:28.859343 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c84fc14-1648-4d03-9542-b41b6dcff7c6-kube-api-access-28g4t" (OuterVolumeSpecName: "kube-api-access-28g4t") pod "7c84fc14-1648-4d03-9542-b41b6dcff7c6" (UID: "7c84fc14-1648-4d03-9542-b41b6dcff7c6"). InnerVolumeSpecName "kube-api-access-28g4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 01:26:28.933045 kubelet[2848]: I0707 01:26:28.932954 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w92pm\" (UniqueName: \"kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-kube-api-access-w92pm\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.933045 kubelet[2848]: I0707 01:26:28.933031 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-bpf-maps\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.933045 kubelet[2848]: I0707 01:26:28.933061 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-net\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.933473 kubelet[2848]: I0707 01:26:28.933088 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-xtables-lock\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.933473 kubelet[2848]: I0707 01:26:28.933118 2848 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-config-path\") pod \"354092e9-2ae2-4702-aff4-78efbc4772d7\" (UID: \"354092e9-2ae2-4702-aff4-78efbc4772d7\") " Jul 7 01:26:28.933473 kubelet[2848]: I0707 01:26:28.933178 2848 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-hubble-tls\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933473 kubelet[2848]: I0707 01:26:28.933199 2848 
reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-cgroup\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933473 kubelet[2848]: I0707 01:26:28.933221 2848 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c84fc14-1648-4d03-9542-b41b6dcff7c6-cilium-config-path\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933473 kubelet[2848]: I0707 01:26:28.933247 2848 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-hostproc\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933473 kubelet[2848]: I0707 01:26:28.933265 2848 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-run\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933834 kubelet[2848]: I0707 01:26:28.933283 2848 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-kernel\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933834 kubelet[2848]: I0707 01:26:28.933350 2848 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-cni-path\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933834 kubelet[2848]: I0707 01:26:28.933395 2848 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-etc-cni-netd\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933834 kubelet[2848]: I0707 01:26:28.933412 2848 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-lib-modules\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933834 kubelet[2848]: I0707 01:26:28.933427 2848 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/354092e9-2ae2-4702-aff4-78efbc4772d7-clustermesh-secrets\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.933834 kubelet[2848]: I0707 01:26:28.933443 2848 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28g4t\" (UniqueName: \"kubernetes.io/projected/7c84fc14-1648-4d03-9542-b41b6dcff7c6-kube-api-access-28g4t\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:28.934642 kubelet[2848]: I0707 01:26:28.934200 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.937544 kubelet[2848]: I0707 01:26:28.937502 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 01:26:28.937638 kubelet[2848]: I0707 01:26:28.937569 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.937638 kubelet[2848]: I0707 01:26:28.937604 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 01:26:28.938152 kubelet[2848]: I0707 01:26:28.938115 2848 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-kube-api-access-w92pm" (OuterVolumeSpecName: "kube-api-access-w92pm") pod "354092e9-2ae2-4702-aff4-78efbc4772d7" (UID: "354092e9-2ae2-4702-aff4-78efbc4772d7"). InnerVolumeSpecName "kube-api-access-w92pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 01:26:29.034513 kubelet[2848]: I0707 01:26:29.034473 2848 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w92pm\" (UniqueName: \"kubernetes.io/projected/354092e9-2ae2-4702-aff4-78efbc4772d7-kube-api-access-w92pm\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:29.035055 kubelet[2848]: I0707 01:26:29.034830 2848 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-bpf-maps\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:29.035251 kubelet[2848]: I0707 01:26:29.034863 2848 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-host-proc-sys-net\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:29.036443 kubelet[2848]: I0707 01:26:29.035274 2848 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/354092e9-2ae2-4702-aff4-78efbc4772d7-xtables-lock\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:29.036443 kubelet[2848]: I0707 01:26:29.035721 2848 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/354092e9-2ae2-4702-aff4-78efbc4772d7-cilium-config-path\") on node \"srv-3dgpq.gb1.brightbox.com\" DevicePath \"\"" Jul 7 01:26:29.384434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819-rootfs.mount: Deactivated successfully. 
Jul 7 01:26:29.384699 systemd[1]: var-lib-kubelet-pods-7c84fc14\x2d1648\x2d4d03\x2d9542\x2db41b6dcff7c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d28g4t.mount: Deactivated successfully. Jul 7 01:26:29.384931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6-rootfs.mount: Deactivated successfully. Jul 7 01:26:29.385112 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6-shm.mount: Deactivated successfully. Jul 7 01:26:29.385317 systemd[1]: var-lib-kubelet-pods-354092e9\x2d2ae2\x2d4702\x2daff4\x2d78efbc4772d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw92pm.mount: Deactivated successfully. Jul 7 01:26:29.385493 systemd[1]: var-lib-kubelet-pods-354092e9\x2d2ae2\x2d4702\x2daff4\x2d78efbc4772d7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 01:26:29.385678 systemd[1]: var-lib-kubelet-pods-354092e9\x2d2ae2\x2d4702\x2daff4\x2d78efbc4772d7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 01:26:29.733418 kubelet[2848]: I0707 01:26:29.731321 2848 scope.go:117] "RemoveContainer" containerID="fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633" Jul 7 01:26:29.734383 containerd[1624]: time="2025-07-07T01:26:29.734019555Z" level=info msg="RemoveContainer for \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\"" Jul 7 01:26:29.742118 containerd[1624]: time="2025-07-07T01:26:29.742036382Z" level=info msg="RemoveContainer for \"fc3af6bcb908d457754edbf6d99117ad9893e68aea2af382f51b1960fa0c0633\" returns successfully" Jul 7 01:26:29.743048 kubelet[2848]: I0707 01:26:29.742276 2848 scope.go:117] "RemoveContainer" containerID="fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc" Jul 7 01:26:29.743978 containerd[1624]: time="2025-07-07T01:26:29.743931782Z" level=info msg="RemoveContainer for \"fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc\"" Jul 7 01:26:29.749342 containerd[1624]: time="2025-07-07T01:26:29.749274577Z" level=info msg="RemoveContainer for \"fe577804ba9c2ced641b6f7b9e01498a2d5b1554997487a2a5a9faf1dcffbbdc\" returns successfully" Jul 7 01:26:29.749621 kubelet[2848]: I0707 01:26:29.749551 2848 scope.go:117] "RemoveContainer" containerID="4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f" Jul 7 01:26:29.752060 containerd[1624]: time="2025-07-07T01:26:29.751993651Z" level=info msg="RemoveContainer for \"4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f\"" Jul 7 01:26:29.772089 containerd[1624]: time="2025-07-07T01:26:29.771958122Z" level=info msg="RemoveContainer for \"4e5cb6f1fc43ea5f0ae02d69f421d256ccb38d70f9dc1632d2b7dfc7ff11801f\" returns successfully" Jul 7 01:26:29.772583 kubelet[2848]: I0707 01:26:29.772531 2848 scope.go:117] "RemoveContainer" containerID="2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156" Jul 7 01:26:29.774500 containerd[1624]: time="2025-07-07T01:26:29.774167032Z" level=info msg="RemoveContainer for \"2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156\"" Jul 7 01:26:29.787061 containerd[1624]: time="2025-07-07T01:26:29.786928490Z" level=info msg="RemoveContainer for \"2823d0c48d0619fe4f4f63d68188c0d6627e42e203d5aebe2fc819213dd11156\" returns successfully" Jul 7 01:26:29.787645 kubelet[2848]: I0707 01:26:29.787597 2848 scope.go:117] "RemoveContainer" 
containerID="8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543" Jul 7 01:26:29.790044 containerd[1624]: time="2025-07-07T01:26:29.789994216Z" level=info msg="RemoveContainer for \"8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543\"" Jul 7 01:26:29.797583 containerd[1624]: time="2025-07-07T01:26:29.797182259Z" level=info msg="RemoveContainer for \"8f03651d43dcafb999660a855dcae2ce236ad2da316578d537994927535cc543\" returns successfully" Jul 7 01:26:29.963231 kubelet[2848]: I0707 01:26:29.963012 2848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354092e9-2ae2-4702-aff4-78efbc4772d7" path="/var/lib/kubelet/pods/354092e9-2ae2-4702-aff4-78efbc4772d7/volumes" Jul 7 01:26:29.966603 kubelet[2848]: I0707 01:26:29.966562 2848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c84fc14-1648-4d03-9542-b41b6dcff7c6" path="/var/lib/kubelet/pods/7c84fc14-1648-4d03-9542-b41b6dcff7c6/volumes" Jul 7 01:26:30.319811 sshd[4457]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:30.326011 systemd[1]: sshd@23-10.244.21.90:22-139.178.68.195:57944.service: Deactivated successfully. Jul 7 01:26:30.327680 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit. Jul 7 01:26:30.332754 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 01:26:30.335033 systemd-logind[1596]: Removed session 25. Jul 7 01:26:30.492701 systemd[1]: Started sshd@24-10.244.21.90:22-139.178.68.195:52786.service - OpenSSH per-connection server daemon (139.178.68.195:52786). Jul 7 01:26:31.527080 sshd[4626]: Accepted publickey for core from 139.178.68.195 port 52786 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:31.530032 sshd[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:31.537415 systemd-logind[1596]: New session 26 of user core. Jul 7 01:26:31.543737 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 7 01:26:33.149487 kubelet[2848]: E0707 01:26:33.149396 2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="354092e9-2ae2-4702-aff4-78efbc4772d7" containerName="mount-cgroup" Jul 7 01:26:33.151054 kubelet[2848]: E0707 01:26:33.149454 2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="354092e9-2ae2-4702-aff4-78efbc4772d7" containerName="apply-sysctl-overwrites" Jul 7 01:26:33.151054 kubelet[2848]: E0707 01:26:33.150268 2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="354092e9-2ae2-4702-aff4-78efbc4772d7" containerName="mount-bpf-fs" Jul 7 01:26:33.151054 kubelet[2848]: E0707 01:26:33.150319 2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="354092e9-2ae2-4702-aff4-78efbc4772d7" containerName="cilium-agent" Jul 7 01:26:33.151054 kubelet[2848]: E0707 01:26:33.150337 2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c84fc14-1648-4d03-9542-b41b6dcff7c6" containerName="cilium-operator" Jul 7 01:26:33.151054 kubelet[2848]: E0707 01:26:33.150353 2848 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="354092e9-2ae2-4702-aff4-78efbc4772d7" containerName="clean-cilium-state" Jul 7 01:26:33.151054 kubelet[2848]: I0707 01:26:33.150460 2848 memory_manager.go:354] "RemoveStaleState removing state" podUID="354092e9-2ae2-4702-aff4-78efbc4772d7" containerName="cilium-agent" Jul 7 01:26:33.151054 kubelet[2848]: I0707 01:26:33.150493 2848 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c84fc14-1648-4d03-9542-b41b6dcff7c6" containerName="cilium-operator" Jul 7 01:26:33.211649 kubelet[2848]: E0707 01:26:33.210587 2848 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 01:26:33.260319 sshd[4626]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:33.266155 systemd[1]: sshd@24-10.244.21.90:22-139.178.68.195:52786.service: Deactivated successfully. Jul 7 01:26:33.270934 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 01:26:33.270983 systemd-logind[1596]: Session 26 logged out. Waiting for processes to exit. Jul 7 01:26:33.274956 systemd-logind[1596]: Removed session 26. 
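[Editor's note] Earlier in this stretch, containerd is asked to StopContainer with 30 s and 2 s timeouts and then reports "Stop container ... with signal terminated". The usual contract behind such requests is SIGTERM first, SIGKILL if the grace period expires. The Go sketch below shows that escalation for a locally started process; it is the general pattern under stated assumptions, not containerd's implementation, and the `sleep 60` child is a stand-in.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM and escalates to SIGKILL if the process
// has not exited before the timeout, the same shape as a stop request
// with a grace period.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		return err
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // SIGKILL after the grace period
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container's init process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop result:", stopWithTimeout(cmd, 2*time.Second))
}
```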
Jul 7 01:26:33.285253 kubelet[2848]: I0707 01:26:33.285076 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-lib-modules\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285253 kubelet[2848]: I0707 01:26:33.285146 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/450dd952-393a-43c5-b407-7785334f24dd-clustermesh-secrets\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285253 kubelet[2848]: I0707 01:26:33.285190 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/450dd952-393a-43c5-b407-7785334f24dd-cilium-config-path\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285253 kubelet[2848]: I0707 01:26:33.285239 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-cilium-cgroup\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285503 kubelet[2848]: I0707 01:26:33.285275 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-host-proc-sys-kernel\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285503 kubelet[2848]: I0707 01:26:33.285352 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-hostproc\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285503 kubelet[2848]: I0707 01:26:33.285390 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-etc-cni-netd\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285503 kubelet[2848]: I0707 01:26:33.285418 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-xtables-lock\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285503 kubelet[2848]: I0707 01:26:33.285446 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-bpf-maps\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.285503 kubelet[2848]: I0707 01:26:33.285474 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-cni-path\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.286035 kubelet[2848]: I0707 01:26:33.285502 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/450dd952-393a-43c5-b407-7785334f24dd-cilium-ipsec-secrets\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.286035 kubelet[2848]: I0707 01:26:33.285528 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/450dd952-393a-43c5-b407-7785334f24dd-hubble-tls\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.286035 kubelet[2848]: I0707 01:26:33.285553 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnnk\" (UniqueName: \"kubernetes.io/projected/450dd952-393a-43c5-b407-7785334f24dd-kube-api-access-2gnnk\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.286035 kubelet[2848]: I0707 01:26:33.285586 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-cilium-run\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.286035 kubelet[2848]: I0707 01:26:33.285623 2848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/450dd952-393a-43c5-b407-7785334f24dd-host-proc-sys-net\") pod \"cilium-5dstj\" (UID: \"450dd952-393a-43c5-b407-7785334f24dd\") " pod="kube-system/cilium-5dstj" Jul 7 01:26:33.437938 systemd[1]: Started sshd@25-10.244.21.90:22-139.178.68.195:52798.service - OpenSSH per-connection server daemon (139.178.68.195:52798). Jul 7 01:26:33.489762 containerd[1624]: time="2025-07-07T01:26:33.489706842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5dstj,Uid:450dd952-393a-43c5-b407-7785334f24dd,Namespace:kube-system,Attempt:0,}" Jul 7 01:26:33.530445 containerd[1624]: time="2025-07-07T01:26:33.529375208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:26:33.530445 containerd[1624]: time="2025-07-07T01:26:33.529504574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:26:33.530445 containerd[1624]: time="2025-07-07T01:26:33.529532088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:33.530445 containerd[1624]: time="2025-07-07T01:26:33.529735498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:33.587968 containerd[1624]: time="2025-07-07T01:26:33.587906632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5dstj,Uid:450dd952-393a-43c5-b407-7785334f24dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\"" Jul 7 01:26:33.612735 containerd[1624]: time="2025-07-07T01:26:33.612659439Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 01:26:33.632917 containerd[1624]: time="2025-07-07T01:26:33.632844463Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c5c84d22fcfacad370e28471ccf705ad0eb882a52137e44e6275f4862573a407\"" Jul 7 01:26:33.634993 containerd[1624]: time="2025-07-07T01:26:33.633504767Z" level=info msg="StartContainer for \"c5c84d22fcfacad370e28471ccf705ad0eb882a52137e44e6275f4862573a407\"" Jul 7 01:26:33.705006 containerd[1624]: time="2025-07-07T01:26:33.704771313Z" level=info msg="StartContainer for \"c5c84d22fcfacad370e28471ccf705ad0eb882a52137e44e6275f4862573a407\" returns successfully" Jul 7 01:26:33.773838 containerd[1624]: time="2025-07-07T01:26:33.773711368Z" level=info msg="shim disconnected" id=c5c84d22fcfacad370e28471ccf705ad0eb882a52137e44e6275f4862573a407 namespace=k8s.io Jul 7 01:26:33.773838 containerd[1624]: time="2025-07-07T01:26:33.773816812Z" level=warning msg="cleaning up after shim disconnected" id=c5c84d22fcfacad370e28471ccf705ad0eb882a52137e44e6275f4862573a407 namespace=k8s.io Jul 7 01:26:33.773838 containerd[1624]: time="2025-07-07T01:26:33.773834325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:26:33.801570 containerd[1624]: time="2025-07-07T01:26:33.801343837Z" level=warning msg="cleanup warnings time=\"2025-07-07T01:26:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 01:26:34.440108 sshd[4644]: Accepted publickey for core from 139.178.68.195 port 52798 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 01:26:34.442471 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:26:34.449363 systemd-logind[1596]: New session 27 of user core. Jul 7 01:26:34.462900 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 7 01:26:34.756634 containerd[1624]: time="2025-07-07T01:26:34.755858486Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 01:26:34.778716 containerd[1624]: time="2025-07-07T01:26:34.778588721Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"154d2f36dce69077704a4b37b3a6484da508241180c892f8a9c970c2f08ab1c0\"" Jul 7 01:26:34.780411 containerd[1624]: time="2025-07-07T01:26:34.780124625Z" level=info msg="StartContainer for \"154d2f36dce69077704a4b37b3a6484da508241180c892f8a9c970c2f08ab1c0\"" Jul 7 01:26:34.870240 containerd[1624]: time="2025-07-07T01:26:34.870171922Z" level=info msg="StartContainer for \"154d2f36dce69077704a4b37b3a6484da508241180c892f8a9c970c2f08ab1c0\" returns successfully" Jul 7 01:26:34.922457 containerd[1624]: time="2025-07-07T01:26:34.922105783Z" level=info msg="shim disconnected" id=154d2f36dce69077704a4b37b3a6484da508241180c892f8a9c970c2f08ab1c0 namespace=k8s.io Jul 7 01:26:34.922457 containerd[1624]: time="2025-07-07T01:26:34.922203917Z" level=warning msg="cleaning up after shim disconnected" id=154d2f36dce69077704a4b37b3a6484da508241180c892f8a9c970c2f08ab1c0 namespace=k8s.io Jul 7 01:26:34.922457 containerd[1624]: time="2025-07-07T01:26:34.922224564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:26:35.133707 sshd[4644]: pam_unix(sshd:session): session closed for user core Jul 7 01:26:35.140078 systemd[1]: sshd@25-10.244.21.90:22-139.178.68.195:52798.service: Deactivated successfully. Jul 7 01:26:35.140558 systemd-logind[1596]: Session 27 logged out. Waiting for processes to exit. Jul 7 01:26:35.144958 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 01:26:35.146363 systemd-logind[1596]: Removed session 27. Jul 7 01:26:35.301725 systemd[1]: Started sshd@26-10.244.21.90:22-139.178.68.195:52814.service - OpenSSH per-connection server daemon (139.178.68.195:52814). Jul 7 01:26:35.412766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-154d2f36dce69077704a4b37b3a6484da508241180c892f8a9c970c2f08ab1c0-rootfs.mount: Deactivated successfully. 
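[Editor's note] The step that just ran, apply-sysctl-overwrites, exists because certain kernel parameters under /proc/sys must be set before the agent starts; what it actually writes is defined by the Cilium image, not by this log. Purely to illustrate the mechanism, the Go sketch below writes one assumed sysctl key the way any such init step would (the key and value are examples, and root privileges plus a writable /proc/sys are required).

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl sets a single kernel parameter by writing its value to the
// corresponding /proc/sys file, which is all "applying a sysctl
// overwrite" means at the mechanical level.
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example key chosen for illustration only; the real init container
	// decides which parameters to overwrite.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
		os.Exit(1)
	}
	fmt.Println("sysctl applied")
}
```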
Jul 7 01:26:35.761464 containerd[1624]: time="2025-07-07T01:26:35.761327299Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 01:26:35.789640 containerd[1624]: time="2025-07-07T01:26:35.789444348Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c25a912bd8526eb74579d05b31c4b087aaa13c7116c4e58a630ddfcb0e88273a\""
Jul 7 01:26:35.792221 containerd[1624]: time="2025-07-07T01:26:35.790931969Z" level=info msg="StartContainer for \"c25a912bd8526eb74579d05b31c4b087aaa13c7116c4e58a630ddfcb0e88273a\""
Jul 7 01:26:35.881687 containerd[1624]: time="2025-07-07T01:26:35.881444877Z" level=info msg="StartContainer for \"c25a912bd8526eb74579d05b31c4b087aaa13c7116c4e58a630ddfcb0e88273a\" returns successfully"
Jul 7 01:26:35.925757 containerd[1624]: time="2025-07-07T01:26:35.925407620Z" level=info msg="shim disconnected" id=c25a912bd8526eb74579d05b31c4b087aaa13c7116c4e58a630ddfcb0e88273a namespace=k8s.io
Jul 7 01:26:35.925757 containerd[1624]: time="2025-07-07T01:26:35.925508713Z" level=warning msg="cleaning up after shim disconnected" id=c25a912bd8526eb74579d05b31c4b087aaa13c7116c4e58a630ddfcb0e88273a namespace=k8s.io
Jul 7 01:26:35.925757 containerd[1624]: time="2025-07-07T01:26:35.925530585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:26:35.943327 containerd[1624]: time="2025-07-07T01:26:35.942565278Z" level=warning msg="cleanup warnings time=\"2025-07-07T01:26:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 01:26:36.315305 sshd[4817]: Accepted publickey for core from 139.178.68.195 port 52814 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI
Jul 7 01:26:36.317801 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:26:36.326553 systemd-logind[1596]: New session 28 of user core.
Jul 7 01:26:36.333785 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 01:26:36.411934 systemd[1]: run-containerd-runc-k8s.io-c25a912bd8526eb74579d05b31c4b087aaa13c7116c4e58a630ddfcb0e88273a-runc.J4Env0.mount: Deactivated successfully.
Jul 7 01:26:36.412485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c25a912bd8526eb74579d05b31c4b087aaa13c7116c4e58a630ddfcb0e88273a-rootfs.mount: Deactivated successfully.
Jul 7 01:26:36.775898 containerd[1624]: time="2025-07-07T01:26:36.775689224Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 01:26:36.798738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611827370.mount: Deactivated successfully.
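mount-bpf-fs is the init step that is expected to leave a BPF filesystem mounted (conventionally at /sys/fs/bpf) for the agent to use. A small sketch, assuming nothing beyond /proc/mounts being readable, that verifies such a mount exists on the node:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Scan /proc/mounts for a filesystem of type "bpf", which is what the
	// mount-bpf-fs init container is expected to leave behind.
	f, err := os.Open("/proc/mounts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[2] == "bpf" {
			fmt.Printf("bpf filesystem mounted at %s\n", fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}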
Jul 7 01:26:36.801895 containerd[1624]: time="2025-07-07T01:26:36.801842998Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a1a2e61deef7125a92091b7a977c296cadc9b81b22f8e17dac03e748cb248cf\""
Jul 7 01:26:36.807314 containerd[1624]: time="2025-07-07T01:26:36.807154330Z" level=info msg="StartContainer for \"2a1a2e61deef7125a92091b7a977c296cadc9b81b22f8e17dac03e748cb248cf\""
Jul 7 01:26:36.921425 containerd[1624]: time="2025-07-07T01:26:36.921352285Z" level=info msg="StartContainer for \"2a1a2e61deef7125a92091b7a977c296cadc9b81b22f8e17dac03e748cb248cf\" returns successfully"
Jul 7 01:26:36.959667 containerd[1624]: time="2025-07-07T01:26:36.959560909Z" level=info msg="shim disconnected" id=2a1a2e61deef7125a92091b7a977c296cadc9b81b22f8e17dac03e748cb248cf namespace=k8s.io
Jul 7 01:26:36.959667 containerd[1624]: time="2025-07-07T01:26:36.959663294Z" level=warning msg="cleaning up after shim disconnected" id=2a1a2e61deef7125a92091b7a977c296cadc9b81b22f8e17dac03e748cb248cf namespace=k8s.io
Jul 7 01:26:36.959959 containerd[1624]: time="2025-07-07T01:26:36.959683781Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:26:37.412111 systemd[1]: run-containerd-runc-k8s.io-2a1a2e61deef7125a92091b7a977c296cadc9b81b22f8e17dac03e748cb248cf-runc.b9wA4z.mount: Deactivated successfully.
Jul 7 01:26:37.412382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a1a2e61deef7125a92091b7a977c296cadc9b81b22f8e17dac03e748cb248cf-rootfs.mount: Deactivated successfully.
Jul 7 01:26:37.783340 containerd[1624]: time="2025-07-07T01:26:37.781611408Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 01:26:37.825314 containerd[1624]: time="2025-07-07T01:26:37.825246748Z" level=info msg="CreateContainer within sandbox \"cf87f450224728c61ee9d62212b1c9eedc50ee1edd4951a75d23346dddef6307\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6d633aec26b22f952af8e579f7b8ac55b7b62add0dbc64721cc6ba29dd2e6c8\""
Jul 7 01:26:37.827323 containerd[1624]: time="2025-07-07T01:26:37.826215044Z" level=info msg="StartContainer for \"c6d633aec26b22f952af8e579f7b8ac55b7b62add0dbc64721cc6ba29dd2e6c8\""
Jul 7 01:26:37.945472 containerd[1624]: time="2025-07-07T01:26:37.945400797Z" level=info msg="StartContainer for \"c6d633aec26b22f952af8e579f7b8ac55b7b62add0dbc64721cc6ba29dd2e6c8\" returns successfully"
Jul 7 01:26:38.661370 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 7 01:26:38.818238 kubelet[2848]: I0707 01:26:38.817553 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5dstj" podStartSLOduration=5.8173205679999995 podStartE2EDuration="5.817320568s" podCreationTimestamp="2025-07-07 01:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:26:38.813187506 +0000 UTC m=+171.096313201" watchObservedRunningTime="2025-07-07 01:26:38.817320568 +0000 UTC m=+171.100446251"
Jul 7 01:26:41.599482 systemd[1]: run-containerd-runc-k8s.io-c6d633aec26b22f952af8e579f7b8ac55b7b62add0dbc64721cc6ba29dd2e6c8-runc.fd3nZT.mount: Deactivated successfully.
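The kubelet's pod_startup_latency_tracker line records podStartE2EDuration="5.817320568s", which is simply watchObservedRunningTime minus podCreationTimestamp (the zero-value pull timestamps indicate no image pull was recorded for this start). A quick sketch reproducing that arithmetic from the logged timestamps:

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Timestamps copied from the kubelet line above; the layout mirrors its
	// "2006-01-02 15:04:05 -0700 MST" style, and Go accepts the extra
	// fractional seconds when parsing.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-07-07 01:26:33 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-07-07 01:26:38.817320568 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 5.817320568s, matching the logged podStartE2EDuration.
	fmt.Println(running.Sub(created))
}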
Jul 7 01:26:41.671780 kubelet[2848]: E0707 01:26:41.671712 2848 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40188->127.0.0.1:38979: write tcp 127.0.0.1:40188->127.0.0.1:38979: write: broken pipe
Jul 7 01:26:42.629598 systemd-networkd[1256]: lxc_health: Link UP
Jul 7 01:26:42.643704 systemd-networkd[1256]: lxc_health: Gained carrier
Jul 7 01:26:43.701638 systemd-networkd[1256]: lxc_health: Gained IPv6LL
Jul 7 01:26:44.208848 kubelet[2848]: E0707 01:26:44.208776 2848 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:40200->127.0.0.1:38979: write tcp 10.244.21.90:10250->10.244.21.90:48032: write: broken pipe
Jul 7 01:26:47.989904 containerd[1624]: time="2025-07-07T01:26:47.989780841Z" level=info msg="StopPodSandbox for \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\""
Jul 7 01:26:47.991167 containerd[1624]: time="2025-07-07T01:26:47.990011506Z" level=info msg="TearDown network for sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" successfully"
Jul 7 01:26:47.991167 containerd[1624]: time="2025-07-07T01:26:47.990044040Z" level=info msg="StopPodSandbox for \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" returns successfully"
Jul 7 01:26:47.994436 containerd[1624]: time="2025-07-07T01:26:47.991503513Z" level=info msg="RemovePodSandbox for \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\""
Jul 7 01:26:48.000349 containerd[1624]: time="2025-07-07T01:26:47.995408931Z" level=info msg="Forcibly stopping sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\""
Jul 7 01:26:48.000349 containerd[1624]: time="2025-07-07T01:26:47.995763908Z" level=info msg="TearDown network for sandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" successfully"
Jul 7 01:26:48.004311 containerd[1624]: time="2025-07-07T01:26:48.002867456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
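Earlier in this block systemd-networkd reports the lxc_health interface, which appears to be the veth Cilium creates for its health endpoint, coming up, gaining carrier and then an IPv6 link-local address; after that the kubelet asks containerd to stop and remove two old sandboxes. As a hedged sketch, the same link state systemd-networkd is reporting can be read straight from sysfs (interface name taken from the log):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// "carrier" and "operstate" under /sys/class/net mirror what
	// systemd-networkd logged for lxc_health above.
	for _, attr := range []string{"carrier", "operstate"} {
		data, err := os.ReadFile("/sys/class/net/lxc_health/" + attr)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s=%s\n", attr, strings.TrimSpace(string(data)))
	}
}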
Jul 7 01:26:48.004311 containerd[1624]: time="2025-07-07T01:26:48.002982333Z" level=info msg="RemovePodSandbox \"bbc14ada1ff8f815dc35e06674fad123609cb8913c58274b9411b2e3e7d0bcc6\" returns successfully"
Jul 7 01:26:48.004311 containerd[1624]: time="2025-07-07T01:26:48.003722825Z" level=info msg="StopPodSandbox for \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\""
Jul 7 01:26:48.004311 containerd[1624]: time="2025-07-07T01:26:48.003835012Z" level=info msg="TearDown network for sandbox \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" successfully"
Jul 7 01:26:48.004311 containerd[1624]: time="2025-07-07T01:26:48.003855951Z" level=info msg="StopPodSandbox for \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" returns successfully"
Jul 7 01:26:48.004737 containerd[1624]: time="2025-07-07T01:26:48.004636945Z" level=info msg="RemovePodSandbox for \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\""
Jul 7 01:26:48.004737 containerd[1624]: time="2025-07-07T01:26:48.004667018Z" level=info msg="Forcibly stopping sandbox \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\""
Jul 7 01:26:48.004853 containerd[1624]: time="2025-07-07T01:26:48.004760215Z" level=info msg="TearDown network for sandbox \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" successfully"
Jul 7 01:26:48.013313 containerd[1624]: time="2025-07-07T01:26:48.011633059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 01:26:48.013313 containerd[1624]: time="2025-07-07T01:26:48.011783528Z" level=info msg="RemovePodSandbox \"1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819\" returns successfully"
Jul 7 01:26:49.189170 sshd[4817]: pam_unix(sshd:session): session closed for user core
Jul 7 01:26:49.195108 systemd-logind[1596]: Session 28 logged out. Waiting for processes to exit.
Jul 7 01:26:49.195662 systemd[1]: sshd@26-10.244.21.90:22-139.178.68.195:52814.service: Deactivated successfully.
Jul 7 01:26:49.202926 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 01:26:49.205562 systemd-logind[1596]: Removed session 28.
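The "Failed to get podSandbox status ... not found" warnings appear to be harmless: RemovePodSandbox still returns successfully, containerd just could no longer look up the sandbox record when publishing the removal event and sent it with a nil status. A hedged sketch of checking whether a given ID still exists in containerd's k8s.io namespace (socket path again an assumption, sandbox ID taken from the log):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The ID below is the sandbox containerd reported as "not found" above.
	id := "1211318ef63d57fe39c75a8be8ed6ffbb4d9bc44a299f82777dcdfa786828819"
	if _, err := client.LoadContainer(ctx, id); errdefs.IsNotFound(err) {
		fmt.Println("sandbox container already gone:", id)
	} else if err != nil {
		log.Fatal(err)
	} else {
		fmt.Println("sandbox container still present:", id)
	}
}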