Jan 13 21:05:00.021911 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 21:05:00.021963 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 21:05:00.021979 kernel: BIOS-provided physical RAM map:
Jan 13 21:05:00.021996 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:05:00.022007 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:05:00.022043 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:05:00.022056 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 13 21:05:00.022068 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 13 21:05:00.022079 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:05:00.022090 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:05:00.022101 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:05:00.022112 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:05:00.022130 kernel: NX (Execute Disable) protection: active
Jan 13 21:05:00.022142 kernel: APIC: Static calls initialized
Jan 13 21:05:00.022155 kernel: SMBIOS 2.8 present.
Jan 13 21:05:00.022168 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 13 21:05:00.022192 kernel: Hypervisor detected: KVM
Jan 13 21:05:00.022208 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:05:00.022220 kernel: kvm-clock: using sched offset of 4497613431 cycles
Jan 13 21:05:00.022233 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:05:00.022245 kernel: tsc: Detected 2499.998 MHz processor
Jan 13 21:05:00.022257 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:05:00.022269 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:05:00.022293 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 13 21:05:00.022306 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:05:00.022318 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:05:00.022335 kernel: Using GB pages for direct mapping
Jan 13 21:05:00.022348 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:05:00.022360 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 13 21:05:00.022372 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:05:00.022384 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:05:00.022396 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:05:00.022408 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 13 21:05:00.022420 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:05:00.022432 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:05:00.022449 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:05:00.022462 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:05:00.022474 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 13 21:05:00.022486 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 13 21:05:00.022498 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 13 21:05:00.022517 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 13 21:05:00.022530 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 13 21:05:00.022547 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 13 21:05:00.022560 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 13 21:05:00.022573 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:05:00.022586 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:05:00.022598 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 21:05:00.022611 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 13 21:05:00.022623 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 21:05:00.022635 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 13 21:05:00.022653 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 21:05:00.022665 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 13 21:05:00.022678 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 21:05:00.022690 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 13 21:05:00.022702 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 21:05:00.022715 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 13 21:05:00.022727 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 21:05:00.022739 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 13 21:05:00.022752 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 21:05:00.022769 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 13 21:05:00.022782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:05:00.022806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 21:05:00.022820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 13 21:05:00.022833 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 13 21:05:00.022846 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 13 21:05:00.022858 kernel: Zone ranges:
Jan 13 21:05:00.022871 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:05:00.022884 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 13 21:05:00.022896 kernel: Normal empty
Jan 13 21:05:00.022915 kernel: Movable zone start for each node
Jan 13 21:05:00.022928 kernel: Early memory node ranges
Jan 13 21:05:00.022940 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:05:00.022953 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 13 21:05:00.022965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 13 21:05:00.022978 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:05:00.022990 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:05:00.023003 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 13 21:05:00.023043 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:05:00.023064 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:05:00.023077 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:05:00.023090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:05:00.023102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:05:00.023115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:05:00.023127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:05:00.023140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:05:00.023153 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:05:00.023165 kernel: TSC deadline timer available
Jan 13 21:05:00.023182 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 13 21:05:00.023196 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:05:00.023208 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:05:00.023221 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:05:00.023234 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:05:00.023247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 21:05:00.023260 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 21:05:00.023273 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 21:05:00.023286 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 21:05:00.023306 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:05:00.023319 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:05:00.023333 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 21:05:00.023346 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:05:00.023359 kernel: random: crng init done
Jan 13 21:05:00.023384 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:05:00.023396 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:05:00.023408 kernel: Fallback order for Node 0: 0
Jan 13 21:05:00.023425 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 13 21:05:00.023437 kernel: Policy zone: DMA32
Jan 13 21:05:00.023450 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:05:00.023462 kernel: software IO TLB: area num 16.
Jan 13 21:05:00.023474 kernel: Memory: 1899488K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 196868K reserved, 0K cma-reserved)
Jan 13 21:05:00.023487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 21:05:00.023499 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:05:00.023511 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 21:05:00.023523 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:05:00.023552 kernel: Dynamic Preempt: voluntary
Jan 13 21:05:00.023565 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:05:00.023583 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:05:00.023597 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 21:05:00.023610 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:05:00.023634 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:05:00.023652 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:05:00.023666 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:05:00.023679 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 21:05:00.023692 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 13 21:05:00.023706 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:05:00.023718 kernel: Console: colour VGA+ 80x25
Jan 13 21:05:00.023736 kernel: printk: console [tty0] enabled
Jan 13 21:05:00.023750 kernel: printk: console [ttyS0] enabled
Jan 13 21:05:00.023763 kernel: ACPI: Core revision 20230628
Jan 13 21:05:00.023783 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:05:00.023806 kernel: x2apic enabled
Jan 13 21:05:00.023826 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:05:00.023840 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 21:05:00.023853 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 13 21:05:00.023867 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:05:00.023880 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 21:05:00.023893 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 21:05:00.023906 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:05:00.023919 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:05:00.023932 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:05:00.023945 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:05:00.023963 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 13 21:05:00.023976 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:05:00.023989 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:05:00.024002 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 21:05:00.024114 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 13 21:05:00.024130 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 13 21:05:00.024143 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:05:00.024156 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:05:00.024169 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:05:00.024182 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:05:00.024204 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 21:05:00.024226 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:05:00.024239 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:05:00.024252 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:05:00.024268 kernel: landlock: Up and running.
Jan 13 21:05:00.024281 kernel: SELinux: Initializing.
Jan 13 21:05:00.024294 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:05:00.024307 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:05:00.024321 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 13 21:05:00.024334 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:05:00.024348 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:05:00.024366 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:05:00.024380 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 13 21:05:00.024394 kernel: signal: max sigframe size: 1776
Jan 13 21:05:00.024407 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:05:00.024421 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:05:00.024434 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:05:00.024447 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:05:00.024461 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:05:00.024486 kernel: .... node #0, CPUs: #1
Jan 13 21:05:00.024503 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 13 21:05:00.024517 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:05:00.024529 kernel: smpboot: Max logical packages: 16
Jan 13 21:05:00.024542 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 13 21:05:00.024555 kernel: devtmpfs: initialized
Jan 13 21:05:00.024568 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:05:00.024586 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:05:00.024599 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 21:05:00.024611 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:05:00.024629 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:05:00.024642 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:05:00.024655 kernel: audit: type=2000 audit(1736802298.624:1): state=initialized audit_enabled=0 res=1
Jan 13 21:05:00.024667 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:05:00.024680 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:05:00.024693 kernel: cpuidle: using governor menu
Jan 13 21:05:00.024706 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:05:00.024725 kernel: dca service started, version 1.12.1
Jan 13 21:05:00.024753 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:05:00.024808 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:05:00.024823 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:05:00.024837 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:05:00.024858 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:05:00.024871 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:05:00.024884 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:05:00.024897 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:05:00.024910 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:05:00.024924 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:05:00.024943 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:05:00.024957 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:05:00.024970 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:05:00.024984 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:05:00.024997 kernel: ACPI: Interpreter enabled
Jan 13 21:05:00.025034 kernel: ACPI: PM: (supports S0 S5)
Jan 13 21:05:00.025050 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:05:00.025064 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:05:00.025077 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:05:00.025107 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:05:00.025121 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:05:00.025406 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:05:00.025610 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:05:00.025783 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:05:00.025817 kernel: PCI host bridge to bus 0000:00
Jan 13 21:05:00.026027 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:05:00.026291 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:05:00.026541 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:05:00.026706 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 13 21:05:00.026898 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:05:00.027105 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 13 21:05:00.027292 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:05:00.027492 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:05:00.027710 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 13 21:05:00.027941 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 13 21:05:00.028740 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 13 21:05:00.028950 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 13 21:05:00.029168 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:05:00.029375 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.029567 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 13 21:05:00.029774 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.029976 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 13 21:05:00.030189 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.030385 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 13 21:05:00.030588 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.030771 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 13 21:05:00.031001 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.031254 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 13 21:05:00.031464 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.031647 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 13 21:05:00.031859 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.032074 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 13 21:05:00.032277 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 21:05:00.032465 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 13 21:05:00.032679 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:05:00.035121 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:05:00.035318 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 13 21:05:00.035497 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 13 21:05:00.035682 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 13 21:05:00.035894 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:05:00.036105 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:05:00.036280 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 13 21:05:00.036463 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 13 21:05:00.036664 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:05:00.036862 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:05:00.038130 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:05:00.038315 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 13 21:05:00.038502 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 13 21:05:00.038693 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:05:00.038892 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:05:00.039121 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 13 21:05:00.039323 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 13 21:05:00.039518 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 21:05:00.039693 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 21:05:00.039901 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 21:05:00.042323 kernel: pci_bus 0000:02: extended config space not accessible
Jan 13 21:05:00.042551 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 13 21:05:00.042749 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 13 21:05:00.042961 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 21:05:00.043174 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 21:05:00.043384 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 21:05:00.043567 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 13 21:05:00.043746 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 21:05:00.043940 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 21:05:00.044143 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 21:05:00.044385 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 21:05:00.044581 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 13 21:05:00.044775 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 21:05:00.044966 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 21:05:00.045218 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 21:05:00.045404 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 21:05:00.045572 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 21:05:00.045752 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 21:05:00.045958 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 21:05:00.047205 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 21:05:00.047400 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 21:05:00.047574 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 21:05:00.047804 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 21:05:00.047980 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 21:05:00.048182 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 21:05:00.048391 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 21:05:00.048577 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 21:05:00.048760 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 21:05:00.048957 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 21:05:00.056193 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 21:05:00.056223 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:05:00.056238 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:05:00.056252 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:05:00.056266 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:05:00.056288 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:05:00.056301 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:05:00.056315 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:05:00.056328 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:05:00.056341 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:05:00.056354 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:05:00.056368 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:05:00.056381 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:05:00.056394 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:05:00.056413 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:05:00.056426 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:05:00.056439 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:05:00.056452 kernel: iommu: Default domain type: Translated
Jan 13 21:05:00.056466 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:05:00.056491 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:05:00.056504 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:05:00.056517 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:05:00.056534 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 13 21:05:00.056736 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:05:00.056927 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:05:00.057135 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:05:00.057156 kernel: vgaarb: loaded
Jan 13 21:05:00.057171 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:05:00.057184 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:05:00.057197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:05:00.057210 kernel: pnp: PnP ACPI init
Jan 13 21:05:00.057390 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:05:00.057411 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:05:00.057424 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:05:00.057438 kernel: NET: Registered PF_INET protocol family
Jan 13 21:05:00.057451 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:05:00.057464 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:05:00.057477 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:05:00.057490 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:05:00.057510 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:05:00.057528 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:05:00.057541 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:05:00.057554 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:05:00.057567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:05:00.057580 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:05:00.057760 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 13 21:05:00.057949 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 21:05:00.063154 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 21:05:00.063357 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 21:05:00.063543 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 21:05:00.063723 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 21:05:00.063923 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 21:05:00.064130 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 21:05:00.064316 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 21:05:00.064496 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 21:05:00.064668 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 21:05:00.064872 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 21:05:00.065067 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 21:05:00.065248 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 21:05:00.065421 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 21:05:00.065602 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 21:05:00.065846 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 21:05:00.070119 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 21:05:00.070307 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 21:05:00.070503 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 21:05:00.070683 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 21:05:00.070894 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 21:05:00.071118 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 21:05:00.071297 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 21:05:00.071476 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 21:05:00.071646 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 21:05:00.071861 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 21:05:00.072083 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 21:05:00.072262 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 21:05:00.072447 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 21:05:00.072629 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 21:05:00.072816 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 21:05:00.072989 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 21:05:00.073200 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 21:05:00.073380 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 21:05:00.073566 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 21:05:00.073757 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 21:05:00.073945 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 21:05:00.079078 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 21:05:00.079272 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 21:05:00.079456 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 21:05:00.079650 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 21:05:00.079837 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 21:05:00.080050 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 21:05:00.080238 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 21:05:00.080409 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 21:05:00.080580 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 21:05:00.080759 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 21:05:00.080957 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 21:05:00.081173 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 21:05:00.081336 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:05:00.081515 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:05:00.081679 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:05:00.081859 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 13 21:05:00.082039 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:05:00.082202 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 13 21:05:00.082384 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 21:05:00.082548 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 13 21:05:00.082730 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 21:05:00.082938 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 13 21:05:00.084190 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 13 21:05:00.084365 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 13 21:05:00.084526 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 21:05:00.084719 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 13 21:05:00.084896 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 13 21:05:00.086139 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 21:05:00.086350 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 13 21:05:00.086535 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 13 21:05:00.086720 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 21:05:00.086927 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 13 21:05:00.088170 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 13 21:05:00.088351 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 21:05:00.088524 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 13 21:05:00.088697 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 13 21:05:00.088901 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 21:05:00.090114 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 13 21:05:00.090294 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 13 21:05:00.090470 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 21:05:00.090664 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 13 21:05:00.090844 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 13 21:05:00.091947 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 21:05:00.091975 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:05:00.091990 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:05:00.092005 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:05:00.092046 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 13 21:05:00.092061 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:05:00.092076 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 21:05:00.092090 kernel: Initialise system trusted keyrings
Jan 13 21:05:00.092112 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 21:05:00.092126 kernel: Key type asymmetric registered
Jan 13 21:05:00.092140 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:05:00.092154 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:05:00.092168 kernel: io scheduler mq-deadline registered
Jan 13 21:05:00.092187 kernel: io scheduler kyber registered
Jan 13 21:05:00.092201 kernel: io scheduler bfq registered
Jan 13 21:05:00.092384 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 13 21:05:00.092560 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 13 21:05:00.092740 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.092928 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 13 21:05:00.093160 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 13 21:05:00.093335 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.093516 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 13 21:05:00.093688 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 13 21:05:00.093884 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.094101 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 13 21:05:00.094276 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 13 21:05:00.094445 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.094614 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 13 21:05:00.094784 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 13 21:05:00.094976 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.095187 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 13 21:05:00.095366 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 13 21:05:00.095535 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.095704 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 13 21:05:00.095891 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 13 21:05:00.096154 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.096323 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 13 21:05:00.096503 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 13 21:05:00.096672 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 21:05:00.096694 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:05:00.096709 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:05:00.096731 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:05:00.096746 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:05:00.096761 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:05:00.096775 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:05:00.096800 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:05:00.096816 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:05:00.096988 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 13 21:05:00.097034 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:05:00.097207 kernel: rtc_cmos 00:03: registered as rtc0
Jan 13 21:05:00.097373 kernel: rtc_cmos 00:03: setting system clock to 2025-01-13T21:04:59 UTC (1736802299)
Jan 13 21:05:00.097543 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 13 21:05:00.097564 kernel: intel_pstate: CPU model not supported
Jan 13 21:05:00.097578 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:05:00.097592 kernel: Segment Routing with IPv6
Jan 13 21:05:00.097606 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:05:00.097620 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:05:00.097634 kernel: Key type dns_resolver registered
Jan 13 21:05:00.097655 kernel: IPI shorthand broadcast: enabled
Jan 13 21:05:00.097670 kernel: sched_clock: Marking stable (1137003955, 232493567)->(1609597259, -240099737)
Jan 13 21:05:00.097684 kernel: registered taskstats version 1
Jan 13 21:05:00.097698 kernel: Loading compiled-in X.509 certificates
Jan 13 21:05:00.097712 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 21:05:00.097726 kernel: Key type .fscrypt registered
Jan 13 21:05:00.097739 kernel: Key type fscrypt-provisioning registered
Jan 13 21:05:00.097753 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:05:00.097767 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:05:00.097786 kernel: ima: No architecture policies found
Jan 13 21:05:00.097814 kernel: clk: Disabling unused clocks
Jan 13 21:05:00.097828 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 21:05:00.097842 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 21:05:00.097856 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 21:05:00.097875 kernel: Run /init as init process
Jan 13 21:05:00.097890 kernel: with arguments:
Jan 13 21:05:00.097904 kernel: /init
Jan 13 21:05:00.097917 kernel: with environment:
Jan 13 21:05:00.097936 kernel: HOME=/
Jan 13 21:05:00.097950 kernel: TERM=linux
Jan 13 21:05:00.097963 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:05:00.097988 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:05:00.098007 systemd[1]: Detected virtualization kvm.
Jan 13 21:05:00.098078 systemd[1]: Detected architecture x86-64.
Jan 13 21:05:00.098092 systemd[1]: Running in initrd.
Jan 13 21:05:00.098106 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:05:00.098128 systemd[1]: Hostname set to <localhost>.
Jan 13 21:05:00.098143 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:05:00.098157 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:05:00.098184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:05:00.098199 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:05:00.098214 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:05:00.098230 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:05:00.098244 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:05:00.098265 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:05:00.098282 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:05:00.098298 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:05:00.098312 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:05:00.098328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:05:00.098342 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:05:00.098357 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:05:00.098378 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:05:00.098392 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:05:00.098408 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:05:00.098422 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:05:00.098444 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:05:00.098459 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:05:00.098474 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:05:00.098506 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:05:00.098526 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:05:00.098541 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:05:00.098568 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:05:00.098583 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:05:00.098598 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:05:00.098612 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:05:00.098627 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:05:00.098642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:05:00.098657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:05:00.098717 systemd-journald[201]: Collecting audit messages is disabled.
Jan 13 21:05:00.098761 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:05:00.098776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:05:00.098811 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:05:00.098835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:05:00.098851 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:05:00.098865 kernel: Bridge firewalling registered
Jan 13 21:05:00.098886 systemd-journald[201]: Journal started
Jan 13 21:05:00.098920 systemd-journald[201]: Runtime Journal (/run/log/journal/d8247e8f644e40aab4bcc457d8126048) is 4.7M, max 37.9M, 33.2M free.
Jan 13 21:05:00.026483 systemd-modules-load[203]: Inserted module 'overlay'
Jan 13 21:05:00.091299 systemd-modules-load[203]: Inserted module 'br_netfilter'
Jan 13 21:05:00.140452 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:05:00.140521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:05:00.142587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:05:00.144435 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:05:00.155248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:05:00.157207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:05:00.162195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:05:00.170998 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:05:00.181080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:05:00.185445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:05:00.192769 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:05:00.195227 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:05:00.206553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:05:00.217216 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:05:00.220413 dracut-cmdline[236]: dracut-dracut-053
Jan 13 21:05:00.223928 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 21:05:00.253952 systemd-resolved[239]: Positive Trust Anchors:
Jan 13 21:05:00.253984 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:05:00.254090 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:05:00.258316 systemd-resolved[239]: Defaulting to hostname 'linux'.
Jan 13 21:05:00.260287 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:05:00.264429 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:05:00.333072 kernel: SCSI subsystem initialized
Jan 13 21:05:00.345071 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:05:00.359039 kernel: iscsi: registered transport (tcp)
Jan 13 21:05:00.385511 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:05:00.385566 kernel: QLogic iSCSI HBA Driver
Jan 13 21:05:00.441251 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:05:00.450297 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:05:00.486723 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:05:00.486799 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:05:00.486823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:05:00.537063 kernel: raid6: sse2x4 gen() 13439 MB/s
Jan 13 21:05:00.555058 kernel: raid6: sse2x2 gen() 9421 MB/s
Jan 13 21:05:00.573690 kernel: raid6: sse2x1 gen() 9660 MB/s
Jan 13 21:05:00.573730 kernel: raid6: using algorithm sse2x4 gen() 13439 MB/s
Jan 13 21:05:00.592717 kernel: raid6: .... xor() 7776 MB/s, rmw enabled
Jan 13 21:05:00.592785 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 21:05:00.619071 kernel: xor: automatically using best checksumming function avx
Jan 13 21:05:00.788077 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:05:00.802706 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:05:00.809268 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:05:00.840934 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Jan 13 21:05:00.849601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:05:00.857172 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:05:00.883781 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Jan 13 21:05:00.922666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:05:00.929205 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:05:01.045035 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:05:01.054236 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:05:01.082878 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:05:01.085685 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:05:01.087357 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:05:01.088069 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:05:01.098371 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:05:01.125665 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:05:01.165072 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 13 21:05:01.235434 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 13 21:05:01.235647 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:05:01.235672 kernel: GPT:17805311 != 125829119
Jan 13 21:05:01.235692 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:05:01.235721 kernel: GPT:17805311 != 125829119
Jan 13 21:05:01.235741 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:05:01.235771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:05:01.235793 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:05:01.235812 kernel: ACPI: bus type USB registered
Jan 13 21:05:01.235830 kernel: usbcore: registered new interface driver usbfs
Jan 13 21:05:01.235849 kernel: usbcore: registered new interface driver hub
Jan 13 21:05:01.235868 kernel: usbcore: registered new device driver usb
Jan 13 21:05:01.233173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:05:01.233347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:05:01.234376 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:05:01.235312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:05:01.237221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:05:01.243575 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:05:01.254393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:05:01.263364 kernel: AVX version of gcm_enc/dec engaged.
Jan 13 21:05:01.263397 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:05:01.284168 kernel: libata version 3.00 loaded.
Jan 13 21:05:01.336035 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:05:01.368309 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474)
Jan 13 21:05:01.368340 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:05:01.368371 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:05:01.368598 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:05:01.368834 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (468)
Jan 13 21:05:01.368858 kernel: scsi host0: ahci
Jan 13 21:05:01.369146 kernel: scsi host1: ahci
Jan 13 21:05:01.369373 kernel: scsi host2: ahci
Jan 13 21:05:01.371366 kernel: scsi host3: ahci
Jan 13 21:05:01.371578 kernel: scsi host4: ahci
Jan 13 21:05:01.374164 kernel: scsi host5: ahci
Jan 13 21:05:01.374385 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Jan 13 21:05:01.374409 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Jan 13 21:05:01.374428 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Jan 13 21:05:01.374447 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Jan 13 21:05:01.374466 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Jan 13 21:05:01.374485 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Jan 13 21:05:01.352189 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:05:01.436258 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:05:01.437571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:05:01.445177 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:05:01.446089 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:05:01.454723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:05:01.465237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:05:01.469532 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:05:01.477942 disk-uuid[565]: Primary Header is updated.
disk-uuid[565]: Secondary Entries is updated.
disk-uuid[565]: Secondary Header is updated.
Jan 13 21:05:01.485370 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:05:01.496929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:05:01.506328 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:05:01.679040 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:05:01.679112 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:05:01.682482 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:05:01.682533 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:05:01.684189 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:05:01.686761 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 21:05:01.697951 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 21:05:01.715982 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 13 21:05:01.716249 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 21:05:01.716463 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 21:05:01.716701 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 13 21:05:01.716943 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 13 21:05:01.717206 kernel: hub 1-0:1.0: USB hub found Jan 13 21:05:01.717442 kernel: hub 1-0:1.0: 4 ports detected Jan 13 21:05:01.717667 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 21:05:01.717925 kernel: hub 2-0:1.0: USB hub found Jan 13 21:05:01.718249 kernel: hub 2-0:1.0: 4 ports detected Jan 13 21:05:01.953058 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 21:05:02.095050 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 21:05:02.101679 kernel: usbcore: registered new interface driver usbhid Jan 13 21:05:02.101751 kernel: usbhid: USB HID core driver Jan 13 21:05:02.109181 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 13 21:05:02.109222 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 13 21:05:02.496694 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:05:02.497182 disk-uuid[566]: The operation has completed successfully. Jan 13 21:05:02.561979 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:05:02.562170 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:05:02.584226 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:05:02.588058 sh[586]: Success Jan 13 21:05:02.605092 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 13 21:05:02.675426 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:05:02.678584 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:05:02.679585 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
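verity-setup.service assembles /dev/mapper/usr, and device-mapper reports sha256-avx as the hash implementation. dm-verity hashes fixed-size blocks of the /usr partition into a tree whose root must match the verity.usrhash= kernel argument. A toy sketch of the leaf level only; real dm-verity uses a per-volume salt and further tree levels above these digests:

    import hashlib

    BLOCK = 4096  # dm-verity's usual data block size

    def leaf_hashes(data: bytes, salt: bytes = b""):
        # one SHA-256 per 4 KiB block; a real verity tree then hashes
        # these digests again, level by level, down to a single root
        for off in range(0, len(data), BLOCK):
            block = data[off:off + BLOCK].ljust(BLOCK, b"\0")
            yield hashlib.sha256(salt + block).hexdigest()

    sample = b"flatcar" * 1000  # stand-in for /usr partition contents
    for i, digest in enumerate(leaf_hashes(sample)):
        print(f"block {i}: {digest}")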
Jan 13 21:05:02.712453 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 21:05:02.712515 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:05:02.714532 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:05:02.717870 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:05:02.717911 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:05:02.729066 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:05:02.730548 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:05:02.738230 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:05:02.741190 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:05:02.754030 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 21:05:02.754079 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:05:02.755394 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:05:02.761054 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:05:02.778320 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 21:05:02.778328 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:05:02.791693 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:05:02.797256 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:05:02.886756 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:05:02.896363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:05:02.939234 systemd-networkd[771]: lo: Link UP Jan 13 21:05:02.939248 systemd-networkd[771]: lo: Gained carrier Jan 13 21:05:02.941859 systemd-networkd[771]: Enumeration completed Jan 13 21:05:02.942036 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:05:02.942949 systemd[1]: Reached target network.target - Network. Jan 13 21:05:02.944268 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:05:02.944274 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:05:02.945812 systemd-networkd[771]: eth0: Link UP Jan 13 21:05:02.945819 systemd-networkd[771]: eth0: Gained carrier Jan 13 21:05:02.945831 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:05:02.965766 ignition[682]: Ignition 2.20.0 Jan 13 21:05:02.965802 ignition[682]: Stage: fetch-offline Jan 13 21:05:02.967997 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
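eth0 is configured because it matches /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all DHCP policy. A hedged approximation of such a unit, generated from Python so the snippet stays self-contained; the shipped file differs in detail, and the [Match]/[Network] contents here are assumptions:

    from pathlib import Path

    UNIT = """\
    [Match]
    Name=*

    [Network]
    DHCP=yes
    """

    # hypothetical location for the sketch; the vendor copy lives under /usr/lib
    target = Path("/tmp/zz-default.network")
    target.write_text(UNIT)
    print(target.read_text())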
Jan 13 21:05:02.965894 ignition[682]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:05:02.965914 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:05:02.966118 ignition[682]: parsed url from cmdline: "" Jan 13 21:05:02.966125 ignition[682]: no config URL provided Jan 13 21:05:02.966139 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:05:02.966157 ignition[682]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:05:02.966173 ignition[682]: failed to fetch config: resource requires networking Jan 13 21:05:02.966464 ignition[682]: Ignition finished successfully Jan 13 21:05:02.975114 systemd-networkd[771]: eth0: DHCPv4 address 10.230.36.54/30, gateway 10.230.36.53 acquired from 10.230.36.53 Jan 13 21:05:02.976698 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:05:03.000833 ignition[781]: Ignition 2.20.0 Jan 13 21:05:03.000859 ignition[781]: Stage: fetch Jan 13 21:05:03.001126 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:05:03.001155 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:05:03.001291 ignition[781]: parsed url from cmdline: "" Jan 13 21:05:03.001298 ignition[781]: no config URL provided Jan 13 21:05:03.001307 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:05:03.001323 ignition[781]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:05:03.001471 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 13 21:05:03.002740 ignition[781]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 13 21:05:03.002812 ignition[781]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 13 21:05:03.017091 ignition[781]: GET result: OK Jan 13 21:05:03.017775 ignition[781]: parsing config with SHA512: 05aca98b15f75535992afc99be0bca7defac0053c8829c399453c45cd6f23f7e4799398bd734b5f3bee8aee6da8ba8902274b6df863370c6239a01e06e0a974b Jan 13 21:05:03.027540 unknown[781]: fetched base config from "system" Jan 13 21:05:03.027559 unknown[781]: fetched base config from "system" Jan 13 21:05:03.027573 unknown[781]: fetched user config from "openstack" Jan 13 21:05:03.029598 ignition[781]: fetch: fetch complete Jan 13 21:05:03.029608 ignition[781]: fetch: fetch passed Jan 13 21:05:03.029685 ignition[781]: Ignition finished successfully Jan 13 21:05:03.031460 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:05:03.040296 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:05:03.056449 ignition[789]: Ignition 2.20.0 Jan 13 21:05:03.056468 ignition[789]: Stage: kargs Jan 13 21:05:03.056681 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:05:03.056701 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:05:03.061101 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:05:03.057846 ignition[789]: kargs: kargs passed Jan 13 21:05:03.057919 ignition[789]: Ignition finished successfully Jan 13 21:05:03.068300 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:05:03.086826 ignition[796]: Ignition 2.20.0 Jan 13 21:05:03.086847 ignition[796]: Stage: disks Jan 13 21:05:03.087089 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:05:03.089343 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
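The fetch stage waits for a config drive, then falls back to the OpenStack metadata service and logs the config's SHA512 before applying it. A minimal stdlib sketch of that fallback; the URL is taken from the log, and the code must run inside the instance to reach 169.254.169.254:

    import hashlib
    import urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        body = resp.read()

    # Ignition logs 'parsing config with SHA512: <digest>' before applying it
    print("GET result: OK" if body else "empty user_data")
    print("SHA512:", hashlib.sha512(body).hexdigest())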
Jan 13 21:05:03.087109 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:05:03.090721 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:05:03.088222 ignition[796]: disks: disks passed Jan 13 21:05:03.092246 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:05:03.088293 ignition[796]: Ignition finished successfully Jan 13 21:05:03.093834 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:05:03.095354 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:05:03.096661 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:05:03.111332 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:05:03.133249 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 21:05:03.136989 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:05:03.142125 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:05:03.258045 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 13 21:05:03.259208 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:05:03.261205 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:05:03.268142 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:05:03.271145 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:05:03.273261 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:05:03.276450 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 13 21:05:03.278591 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:05:03.278705 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:05:03.287049 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) Jan 13 21:05:03.289988 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:05:03.296132 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 21:05:03.296179 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:05:03.296226 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:05:03.305254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:05:03.310283 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:05:03.312991 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:05:03.384609 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:05:03.391461 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:05:03.400858 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:05:03.410768 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:05:03.526575 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:05:03.532144 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
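systemd-fsck's summary "clean, 14/1628000 files, 120691/1617920 blocks" is inode and block usage. Assuming ext4's default 4 KiB block size (the block size itself is not printed), that works out to roughly 471 MiB in use on a ~6.2 GiB filesystem:

    files_used, files_total = 14, 1_628_000
    blocks_used, blocks_total = 120_691, 1_617_920
    block_size = 4096  # ext4 default; an assumption, fsck does not print it

    print(f"inodes: {files_used}/{files_total} "
          f"({100 * files_used / files_total:.4f}% used)")
    print(f"space:  {blocks_used * block_size / 2**20:.0f} MiB "
          f"of {blocks_total * block_size / 2**30:.1f} GiB "
          f"({100 * blocks_used / blocks_total:.1f}% used)")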
Jan 13 21:05:03.534247 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:05:03.549045 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 21:05:03.580926 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:05:03.583307 ignition[929]: INFO : Ignition 2.20.0 Jan 13 21:05:03.583307 ignition[929]: INFO : Stage: mount Jan 13 21:05:03.583307 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:05:03.583307 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:05:03.583307 ignition[929]: INFO : mount: mount passed Jan 13 21:05:03.583307 ignition[929]: INFO : Ignition finished successfully Jan 13 21:05:03.584113 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:05:03.710459 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:05:04.301367 systemd-networkd[771]: eth0: Gained IPv6LL Jan 13 21:05:05.808464 systemd-networkd[771]: eth0: Ignoring DHCPv6 address 2a02:1348:179:890d:24:19ff:fee6:2436/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:890d:24:19ff:fee6:2436/64 assigned by NDisc. Jan 13 21:05:05.808480 systemd-networkd[771]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 13 21:05:10.465177 coreos-metadata[814]: Jan 13 21:05:10.465 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:05:10.490750 coreos-metadata[814]: Jan 13 21:05:10.490 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:05:10.505215 coreos-metadata[814]: Jan 13 21:05:10.505 INFO Fetch successful Jan 13 21:05:10.506236 coreos-metadata[814]: Jan 13 21:05:10.506 INFO wrote hostname srv-85agx.gb1.brightbox.com to /sysroot/etc/hostname Jan 13 21:05:10.508555 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 13 21:05:10.508751 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 13 21:05:10.517167 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:05:10.533308 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:05:10.548095 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945) Jan 13 21:05:10.552618 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 21:05:10.552679 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:05:10.555596 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:05:10.560045 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:05:10.563232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
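networkd drops the DHCPv6 /128 because SLAAC (NDisc) already formed the identical address with a /64; the ff:fe in the middle of the interface identifier marks an EUI-64 expansion of the NIC's MAC, and the IPv6Token= hint in the log exists to override exactly that. A sketch of the EUI-64 derivation; the MAC below is inferred from the logged address, so treat it as an assumption:

    def eui64_interface_id(mac: str) -> str:
        b = bytearray(int(x, 16) for x in mac.split(":"))
        b[0] ^= 0x02                  # flip the universal/local bit
        b[3:3] = b"\xff\xfe"          # splice ff:fe into the middle
        return ":".join(f"{b[i] << 8 | b[i + 1]:x}" for i in range(0, 8, 2))

    # 02:24:19:e6:24:36 reproduces the identifier seen in the log
    print(eui64_interface_id("02:24:19:e6:24:36"))  # -> 24:19ff:fee6:2436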
Jan 13 21:05:10.598933 ignition[963]: INFO : Ignition 2.20.0 Jan 13 21:05:10.600171 ignition[963]: INFO : Stage: files Jan 13 21:05:10.602396 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:05:10.602396 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:05:10.602396 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:05:10.605199 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:05:10.605199 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:05:10.607661 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:05:10.608637 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:05:10.608637 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:05:10.608495 unknown[963]: wrote ssh authorized keys file for user: core Jan 13 21:05:10.611544 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:05:10.611544 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:05:10.838971 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:05:11.193666 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:05:11.193666 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:05:11.196616 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 21:05:11.768920 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:05:12.139172 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:05:12.139172 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:05:12.141877 
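Each files-stage op above corresponds to an entry in the instance's Ignition config. A hedged reconstruction of the fragment behind op(3), in the Ignition v3 JSON schema; only the path and URL come from the log, while the spec version and mode are assumptions:

    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "mode": 0o644,         # assumed permissions
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                    },
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))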
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:05:12.141877 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 21:05:12.715789 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:05:15.694940 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:05:15.694940 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 21:05:15.701237 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:05:15.704384 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:05:15.705709 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 21:05:15.705709 ignition[963]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:05:15.705709 ignition[963]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:05:15.705709 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:05:15.712064 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:05:15.712064 ignition[963]: INFO : files: files passed Jan 13 21:05:15.712064 ignition[963]: INFO : Ignition finished successfully Jan 13 21:05:15.708531 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:05:15.728288 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:05:15.734027 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:05:15.735432 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:05:15.737461 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
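Ops (a) and (b) above first write the symlink /etc/extensions/kubernetes.raw pointing at the image under /opt/extensions, then download the sysext image itself; systemd-sysext later discovers and merges it (the sd-merge lines further down). The same layout, reproduced under a hypothetical staging root:

    import os

    root = "/tmp/sysroot"  # hypothetical staging root for the sketch
    raw = "opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
    link = "etc/extensions/kubernetes.raw"

    os.makedirs(os.path.join(root, os.path.dirname(raw)), exist_ok=True)
    os.makedirs(os.path.join(root, os.path.dirname(link)), exist_ok=True)
    open(os.path.join(root, raw), "wb").close()  # placeholder for the image

    # the link target is absolute, exactly as Ignition wrote it
    os.symlink("/" + raw, os.path.join(root, link))
    print(os.readlink(os.path.join(root, link)))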
Jan 13 21:05:15.752622 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:05:15.752622 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:05:15.755082 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:05:15.755217 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:05:15.757685 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:05:15.770681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:05:15.812949 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:05:15.813149 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:05:15.814904 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:05:15.816217 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:05:15.817759 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:05:15.824192 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:05:15.844137 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:05:15.848246 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:05:15.865933 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:05:15.866847 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:05:15.868492 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:05:15.870046 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:05:15.870231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:05:15.871955 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:05:15.872855 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:05:15.874333 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:05:15.875706 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:05:15.877122 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:05:15.878632 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:05:15.880157 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:05:15.881743 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:05:15.883169 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:05:15.884688 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:05:15.886167 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:05:15.886395 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:05:15.888093 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:05:15.889134 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:05:15.890491 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:05:15.890840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 21:05:15.891985 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:05:15.892211 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:05:15.894248 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:05:15.894439 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:05:15.896041 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:05:15.896195 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:05:15.906810 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:05:15.909327 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:05:15.910511 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:05:15.911189 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:05:15.914044 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:05:15.916193 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:05:15.930363 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:05:15.931468 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:05:15.937815 ignition[1016]: INFO : Ignition 2.20.0 Jan 13 21:05:15.939236 ignition[1016]: INFO : Stage: umount Jan 13 21:05:15.940492 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:05:15.940492 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:05:15.943899 ignition[1016]: INFO : umount: umount passed Jan 13 21:05:15.944828 ignition[1016]: INFO : Ignition finished successfully Jan 13 21:05:15.946227 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:05:15.947238 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:05:15.948994 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:05:15.950211 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:05:15.951062 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:05:15.951139 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:05:15.951840 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:05:15.951933 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:05:15.954739 systemd[1]: Stopped target network.target - Network. Jan 13 21:05:15.955506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:05:15.955594 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:05:15.957017 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:05:15.958315 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:05:15.962133 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:05:15.963199 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:05:15.964743 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:05:15.966088 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:05:15.966174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:05:15.967395 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:05:15.967463 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 13 21:05:15.968691 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:05:15.968759 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:05:15.970133 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:05:15.970205 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:05:15.971724 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:05:15.973513 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:05:15.976822 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:05:15.977148 systemd-networkd[771]: eth0: DHCPv6 lease lost Jan 13 21:05:15.978192 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:05:15.978335 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:05:15.981154 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:05:15.981361 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:05:15.983980 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:05:15.984480 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:05:15.985679 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:05:15.985770 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:05:15.993171 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:05:15.993931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:05:15.994008 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:05:15.996766 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:05:15.998150 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:05:15.998326 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:05:16.012700 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:05:16.013790 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:05:16.015540 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:05:16.015674 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:05:16.019246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:05:16.019344 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:05:16.020900 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:05:16.020960 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:05:16.022450 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:05:16.022523 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:05:16.024661 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:05:16.024740 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:05:16.026069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:05:16.026151 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:05:16.033201 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:05:16.034021 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 13 21:05:16.034119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:05:16.035675 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:05:16.035742 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:05:16.039095 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:05:16.039172 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:05:16.040608 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:05:16.040679 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:05:16.043470 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:05:16.043542 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:05:16.044319 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:05:16.044407 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:05:16.046879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:05:16.046949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:05:16.049391 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:05:16.049563 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:05:16.050837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:05:16.059302 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:05:16.070047 systemd[1]: Switching root. Jan 13 21:05:16.109975 systemd-journald[201]: Journal stopped Jan 13 21:05:17.589904 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 13 21:05:17.590015 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:05:17.590162 kernel: SELinux: policy capability open_perms=1 Jan 13 21:05:17.590187 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:05:17.590207 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:05:17.590225 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:05:17.590244 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:05:17.590263 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:05:17.590282 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:05:17.590307 kernel: audit: type=1403 audit(1736802316.367:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:05:17.590476 systemd[1]: Successfully loaded SELinux policy in 55.710ms. Jan 13 21:05:17.591334 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.173ms. Jan 13 21:05:17.591380 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:05:17.591404 systemd[1]: Detected virtualization kvm. Jan 13 21:05:17.591425 systemd[1]: Detected architecture x86-64. Jan 13 21:05:17.591446 systemd[1]: Detected first boot. Jan 13 21:05:17.591467 systemd[1]: Hostname set to <srv-85agx.gb1.brightbox.com>. Jan 13 21:05:17.591495 systemd[1]: Initializing machine ID from VM UUID.
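"Initializing machine ID from VM UUID" means systemd seeds the machine ID from the hypervisor-provided DMI product UUID instead of generating a random one. A sketch of where that value is exposed and how it maps onto the 32-hex-digit machine-id form; this illustrates the idea, not systemd's exact code path, and reading the sysfs file requires root:

    from pathlib import Path

    # exposed by the KVM firmware via SMBIOS/DMI; readable only as root
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    machine_id = uuid.replace("-", "").lower()
    print("DMI product UUID:", uuid)
    print("derived machine-id:", machine_id)  # 32 hex chars, as in /etc/machine-id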
Jan 13 21:05:17.591517 zram_generator::config[1059]: No configuration found. Jan 13 21:05:17.591554 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:05:17.591578 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:05:17.591598 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:05:17.591619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:05:17.591641 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:05:17.591662 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:05:17.591692 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:05:17.591714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:05:17.591735 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:05:17.591761 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:05:17.591783 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:05:17.591804 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:05:17.591824 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:05:17.591846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:05:17.591867 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:05:17.591892 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:05:17.591915 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:05:17.591936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:05:17.591956 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:05:17.591991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:05:17.592011 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:05:17.594050 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:05:17.594089 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:05:17.594112 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:05:17.594133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:05:17.594155 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:05:17.594175 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:05:17.594234 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:05:17.594269 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:05:17.594291 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:05:17.594312 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:05:17.594333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:05:17.594368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:05:17.594390 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 13 21:05:17.594421 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:05:17.594445 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:05:17.594472 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:05:17.594494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:05:17.594515 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:05:17.594536 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:05:17.594557 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:05:17.594578 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:05:17.594600 systemd[1]: Reached target machines.target - Containers. Jan 13 21:05:17.594621 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:05:17.594648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:05:17.594671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:05:17.594700 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:05:17.594734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:05:17.594754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:05:17.594780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:05:17.594801 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:05:17.594833 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:05:17.594853 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:05:17.594878 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:05:17.594910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:05:17.594930 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:05:17.594951 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:05:17.594971 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:05:17.594991 kernel: fuse: init (API version 7.39) Jan 13 21:05:17.595010 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:05:17.595030 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:05:17.597159 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:05:17.597199 kernel: loop: module loaded Jan 13 21:05:17.597222 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:05:17.597250 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:05:17.597272 systemd[1]: Stopped verity-setup.service. Jan 13 21:05:17.597294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:05:17.597327 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 13 21:05:17.597361 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:05:17.597397 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:05:17.597425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:05:17.597447 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:05:17.597469 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:05:17.597491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:05:17.597512 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:05:17.597568 systemd-journald[1148]: Collecting audit messages is disabled. Jan 13 21:05:17.597624 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:05:17.597649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:05:17.597670 kernel: ACPI: bus type drm_connector registered Jan 13 21:05:17.597700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:05:17.597721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:05:17.597755 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:05:17.597782 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:05:17.597803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:05:17.597823 systemd-journald[1148]: Journal started Jan 13 21:05:17.597860 systemd-journald[1148]: Runtime Journal (/run/log/journal/d8247e8f644e40aab4bcc457d8126048) is 4.7M, max 37.9M, 33.2M free. Jan 13 21:05:17.174223 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:05:17.201952 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:05:17.202677 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:05:17.601139 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:05:17.605124 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:05:17.606265 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:05:17.606495 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:05:17.607643 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:05:17.607829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:05:17.608997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:05:17.611282 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:05:17.612514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:05:17.628952 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:05:17.639212 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:05:17.648500 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:05:17.649443 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:05:17.649594 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:05:17.651777 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 13 21:05:17.660656 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:05:17.666169 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:05:17.667119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:05:17.672191 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:05:17.675370 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:05:17.676175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:05:17.681186 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:05:17.687208 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:05:17.691233 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:05:17.693501 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:05:17.698630 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:05:17.702879 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:05:17.704295 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:05:17.705696 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:05:17.736297 systemd-journald[1148]: Time spent on flushing to /var/log/journal/d8247e8f644e40aab4bcc457d8126048 is 58.492ms for 1144 entries. Jan 13 21:05:17.736297 systemd-journald[1148]: System Journal (/var/log/journal/d8247e8f644e40aab4bcc457d8126048) is 8.0M, max 584.8M, 576.8M free. Jan 13 21:05:17.850806 systemd-journald[1148]: Received client request to flush runtime journal. Jan 13 21:05:17.850885 kernel: loop0: detected capacity change from 0 to 138184 Jan 13 21:05:17.782480 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:05:17.783613 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:05:17.790214 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:05:17.808166 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 21:05:17.808188 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 21:05:17.821043 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:05:17.829232 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:05:17.849089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:05:17.857700 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:05:17.867191 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:05:17.868946 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:05:17.893084 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:05:17.922140 kernel: loop1: detected capacity change from 0 to 8 Jan 13 21:05:17.941596 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 13 21:05:17.948264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:05:17.949416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:05:17.960271 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:05:17.973747 kernel: loop2: detected capacity change from 0 to 141000 Jan 13 21:05:18.017986 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:05:18.030023 kernel: loop3: detected capacity change from 0 to 210664 Jan 13 21:05:18.038222 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 13 21:05:18.038675 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 13 21:05:18.053633 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:05:18.080106 kernel: loop4: detected capacity change from 0 to 138184 Jan 13 21:05:18.110125 kernel: loop5: detected capacity change from 0 to 8 Jan 13 21:05:18.119072 kernel: loop6: detected capacity change from 0 to 141000 Jan 13 21:05:18.150693 kernel: loop7: detected capacity change from 0 to 210664 Jan 13 21:05:18.186426 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 13 21:05:18.187913 (sd-merge)[1222]: Merged extensions into '/usr'. Jan 13 21:05:18.202855 systemd[1]: Reloading requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:05:18.202877 systemd[1]: Reloading... Jan 13 21:05:18.354939 zram_generator::config[1248]: No configuration found. Jan 13 21:05:18.502672 ldconfig[1187]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:05:18.614776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:05:18.687792 systemd[1]: Reloading finished in 482 ms. Jan 13 21:05:18.719488 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:05:18.720880 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:05:18.734290 systemd[1]: Starting ensure-sysext.service... Jan 13 21:05:18.738646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:05:18.739947 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:05:18.753105 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:05:18.759155 systemd[1]: Reloading requested from client PID 1304 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:05:18.759176 systemd[1]: Reloading... Jan 13 21:05:18.807899 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:05:18.808438 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:05:18.811963 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:05:18.813564 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Jan 13 21:05:18.814193 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. 
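sd-merge overlays the four extension images onto the base /usr. A toy model of the layered merge, with the extension names from the log but invented file paths purely for illustration; later layers win for conflicting paths:

    base = {"/usr/bin/bash": "base"}
    extensions = {
        "containerd-flatcar": {"/usr/bin/containerd": "ext"},
        "docker-flatcar":     {"/usr/bin/docker": "ext"},
        "kubernetes":         {"/usr/bin/kubelet": "ext"},
        "oem-openstack":      {"/usr/share/oem/meta": "ext"},
    }

    merged = dict(base)
    for name, tree in extensions.items():  # applied in order, like overlay layers
        merged.update(tree)

    for path in sorted(merged):
        print(path, "<-", merged[path])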
Jan 13 21:05:18.823288 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:05:18.823449 systemd-tmpfiles[1305]: Skipping /boot Jan 13 21:05:18.824733 systemd-udevd[1307]: Using default interface naming scheme 'v255'. Jan 13 21:05:18.853558 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:05:18.854166 systemd-tmpfiles[1305]: Skipping /boot Jan 13 21:05:18.874097 zram_generator::config[1334]: No configuration found. Jan 13 21:05:19.061639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1344) Jan 13 21:05:19.182044 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:05:19.192495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:05:19.196055 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:05:19.201040 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:05:19.288045 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:05:19.299565 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:05:19.305059 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:05:19.305336 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:05:19.315274 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:05:19.315652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:05:19.317701 systemd[1]: Reloading finished in 557 ms. Jan 13 21:05:19.342772 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:05:19.346115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:05:19.403755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:05:19.414458 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 21:05:19.425407 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:05:19.426469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:05:19.471424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:05:19.477301 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:05:19.481851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:05:19.484064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:05:19.491374 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:05:19.502355 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:05:19.515342 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:05:19.525345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:05:19.532338 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 13 21:05:19.533202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:05:19.537351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:05:19.538095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:05:19.567117 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:05:19.573007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:05:19.577091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:05:19.586428 systemd[1]: Finished ensure-sysext.service. Jan 13 21:05:19.591312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:05:19.591705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:05:19.608400 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:05:19.611757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:05:19.612940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:05:19.614279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:05:19.624317 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:05:19.628501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:05:19.631110 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:05:19.631261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:05:19.633189 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:05:19.634132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:05:19.636140 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:05:19.638951 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:05:19.643505 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:05:19.643708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:05:19.664930 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:05:19.676126 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:05:19.678862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:05:19.679179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:05:19.686501 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:05:19.689531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:05:19.699668 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 13 21:05:19.710250 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:05:19.714051 augenrules[1465]: No rules Jan 13 21:05:19.713796 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:05:19.715087 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 21:05:19.725007 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:05:19.745116 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:05:19.769486 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:05:19.779063 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:05:19.780157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:05:19.787376 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:05:19.892695 systemd-networkd[1428]: lo: Link UP Jan 13 21:05:19.892709 systemd-networkd[1428]: lo: Gained carrier Jan 13 21:05:19.896086 systemd-networkd[1428]: Enumeration completed Jan 13 21:05:19.896680 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:05:19.896694 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:05:19.901324 systemd-networkd[1428]: eth0: Link UP Jan 13 21:05:19.901338 systemd-networkd[1428]: eth0: Gained carrier Jan 13 21:05:19.901357 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:05:19.913117 systemd-networkd[1428]: eth0: DHCPv4 address 10.230.36.54/30, gateway 10.230.36.53 acquired from 10.230.36.53 Jan 13 21:05:19.925474 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:05:19.927320 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:05:19.930101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:05:19.933207 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:05:19.931843 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:05:19.938709 systemd-resolved[1429]: Positive Trust Anchors: Jan 13 21:05:19.939003 systemd-resolved[1429]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:05:19.939129 systemd-resolved[1429]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:05:19.942224 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:05:19.947175 systemd-resolved[1429]: Using system hostname 'srv-85agx.gb1.brightbox.com'. Jan 13 21:05:19.949981 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:05:19.950958 systemd[1]: Reached target network.target - Network. 
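"found matching network ... based on potentially unpredictable interface name" means eth0 was matched by Flatcar's catch-all fallback unit rather than a user-supplied one, after which DHCPv4 hands out the /30 shown. The fallback behaves roughly like the following .network file (a hedged approximation, not the verbatim Flatcar unit):

    # Hedged approximation of a zz-default.network catch-all; the unit
    # actually shipped by Flatcar may differ in detail.
    [Match]
    Name=*

    [Network]
    DHCP=yes
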
Jan 13 21:05:19.951777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:05:19.952646 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:05:19.953644 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:05:19.954601 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:05:19.956064 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:05:19.957140 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:05:19.958062 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:05:19.958949 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:05:19.959122 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:05:19.959866 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:05:19.962087 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:05:19.965051 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:05:19.974276 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:05:19.976103 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:05:19.977167 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:05:19.978561 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:05:19.979256 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:05:19.979989 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:05:19.980196 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:05:19.987172 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:05:19.995868 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:05:20.000212 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:05:20.004175 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:05:20.009141 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:05:20.009868 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:05:20.013636 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:05:20.017578 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:05:20.024168 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:05:20.032209 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:05:20.039777 jq[1493]: false Jan 13 21:05:20.665362 systemd-timesyncd[1442]: Contacted time server 77.74.199.184:123 (0.flatcar.pool.ntp.org). Jan 13 21:05:20.665461 systemd-timesyncd[1442]: Initial clock synchronization to Mon 2025-01-13 21:05:20.665221 UTC. Jan 13 21:05:20.666328 systemd-resolved[1429]: Clock change detected. Flushing caches. Jan 13 21:05:20.666705 systemd[1]: Starting systemd-logind.service - User Login Management... 
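Note the timestamp discontinuity in the last few entries: systemd-timesyncd reached 0.flatcar.pool.ntp.org and stepped the clock, so the journal jumps from 21:05:20.039777 straight to 21:05:20.665362, and systemd-resolved flushes its caches in response. The apparent step can be estimated directly from the two adjacent timestamps (only an approximation of the true NTP offset):

    # Rough size of the clock step, from the two adjacent journal timestamps.
    from datetime import datetime

    before = datetime.strptime("21:05:20.039777", "%H:%M:%S.%f")
    after = datetime.strptime("21:05:20.665362", "%H:%M:%S.%f")
    print(f"clock stepped forward by roughly {(after - before).total_seconds():.3f} s")
    # -> clock stepped forward by roughly 0.626 s
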
Jan 13 21:05:20.668622 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:05:20.670083 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:05:20.671399 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:05:20.679308 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:05:20.688712 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:05:20.688984 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:05:20.731763 jq[1503]: true Jan 13 21:05:20.743716 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:05:20.744008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:05:20.746553 extend-filesystems[1494]: Found loop4 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found loop5 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found loop6 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found loop7 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda1 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda2 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda3 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found usr Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda4 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda6 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda7 Jan 13 21:05:20.746553 extend-filesystems[1494]: Found vda9 Jan 13 21:05:20.746553 extend-filesystems[1494]: Checking size of /dev/vda9 Jan 13 21:05:20.816686 extend-filesystems[1494]: Resized partition /dev/vda9 Jan 13 21:05:20.752864 dbus-daemon[1492]: [system] SELinux support is enabled Jan 13 21:05:20.823950 update_engine[1502]: I20250113 21:05:20.781522 1502 main.cc:92] Flatcar Update Engine starting Jan 13 21:05:20.823950 update_engine[1502]: I20250113 21:05:20.796757 1502 update_check_scheduler.cc:74] Next update check in 3m53s Jan 13 21:05:20.751175 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:05:20.824660 extend-filesystems[1531]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:05:20.827541 tar[1506]: linux-amd64/helm Jan 13 21:05:20.764576 dbus-daemon[1492]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1428 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:05:20.753219 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:05:20.770811 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:05:20.759950 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:05:20.759988 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 13 21:05:20.765600 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:05:20.765633 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:05:20.788416 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:05:20.796277 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:05:20.796536 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:05:20.807825 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:05:20.823872 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:05:20.841169 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 13 21:05:20.855756 jq[1521]: true Jan 13 21:05:20.897160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1343) Jan 13 21:05:21.085020 systemd-logind[1501]: Watching system buttons on /dev/input/event2 (Power Button) Jan 13 21:05:21.085092 systemd-logind[1501]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:05:21.088801 systemd-logind[1501]: New seat seat0. Jan 13 21:05:21.094650 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:05:21.121741 bash[1553]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:05:21.128205 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:05:21.142394 systemd[1]: Starting sshkeys.service... Jan 13 21:05:21.145568 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:05:21.186181 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 13 21:05:21.207859 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:05:21.223939 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:05:21.223939 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 13 21:05:21.223939 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 13 21:05:21.218646 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:05:21.226773 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:05:21.230652 extend-filesystems[1494]: Resized filesystem in /dev/vda9 Jan 13 21:05:21.225886 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:05:21.230013 dbus-daemon[1492]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1527 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:05:21.226200 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:05:21.237359 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:05:21.249600 systemd[1]: Starting polkit.service - Authorization Manager... 
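The extend-filesystems / resize2fs exchange grows the root filesystem online to fill the disk, and the kernel confirms the new size. Converting the logged 4 KiB block counts into sizes shows what actually happened:

    # Before/after sizes implied by the resize2fs output above (4 KiB blocks).
    BLOCK = 4096
    before_blocks, after_blocks = 1_617_920, 15_121_403

    def gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"before: {gib(before_blocks):.2f} GiB, after: {gib(after_blocks):.2f} GiB")
    # -> before: 6.17 GiB, after: 57.68 GiB
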
Jan 13 21:05:21.271273 containerd[1514]: time="2025-01-13T21:05:21.270064427Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 21:05:21.274525 polkitd[1567]: Started polkitd version 121 Jan 13 21:05:21.290375 polkitd[1567]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:05:21.290483 polkitd[1567]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:05:21.292509 polkitd[1567]: Finished loading, compiling and executing 2 rules Jan 13 21:05:21.297620 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:05:21.297877 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:05:21.303141 polkitd[1567]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:05:21.336048 systemd-hostnamed[1527]: Hostname set to (static) Jan 13 21:05:21.343311 containerd[1514]: time="2025-01-13T21:05:21.343254928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:05:21.348654 containerd[1514]: time="2025-01-13T21:05:21.348523179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:05:21.348654 containerd[1514]: time="2025-01-13T21:05:21.348572116Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:05:21.348654 containerd[1514]: time="2025-01-13T21:05:21.348596942Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:05:21.349214 containerd[1514]: time="2025-01-13T21:05:21.348922699Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:05:21.349214 containerd[1514]: time="2025-01-13T21:05:21.348959081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:05:21.349214 containerd[1514]: time="2025-01-13T21:05:21.349083829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:05:21.349214 containerd[1514]: time="2025-01-13T21:05:21.349106317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:05:21.349764 containerd[1514]: time="2025-01-13T21:05:21.349474020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:05:21.349764 containerd[1514]: time="2025-01-13T21:05:21.349509681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:05:21.349764 containerd[1514]: time="2025-01-13T21:05:21.349533744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:05:21.349764 containerd[1514]: time="2025-01-13T21:05:21.349551271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:05:21.349764 containerd[1514]: time="2025-01-13T21:05:21.349693435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:05:21.350290 containerd[1514]: time="2025-01-13T21:05:21.350057204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:05:21.351273 containerd[1514]: time="2025-01-13T21:05:21.351241284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:05:21.351328 containerd[1514]: time="2025-01-13T21:05:21.351272446Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:05:21.352078 containerd[1514]: time="2025-01-13T21:05:21.351424907Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:05:21.352078 containerd[1514]: time="2025-01-13T21:05:21.351511590Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:05:21.356184 containerd[1514]: time="2025-01-13T21:05:21.356135945Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:05:21.358228 containerd[1514]: time="2025-01-13T21:05:21.358199889Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:05:21.358320 containerd[1514]: time="2025-01-13T21:05:21.358298883Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:05:21.358389 containerd[1514]: time="2025-01-13T21:05:21.358333011Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:05:21.358389 containerd[1514]: time="2025-01-13T21:05:21.358355859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:05:21.358592 containerd[1514]: time="2025-01-13T21:05:21.358562183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:05:21.358999 containerd[1514]: time="2025-01-13T21:05:21.358973146Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:05:21.359210 containerd[1514]: time="2025-01-13T21:05:21.359182382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:05:21.359273 containerd[1514]: time="2025-01-13T21:05:21.359214920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:05:21.359273 containerd[1514]: time="2025-01-13T21:05:21.359237802Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:05:21.359349 containerd[1514]: time="2025-01-13T21:05:21.359270919Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:05:21.359349 containerd[1514]: time="2025-01-13T21:05:21.359289168Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 13 21:05:21.359349 containerd[1514]: time="2025-01-13T21:05:21.359314546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:05:21.359349 containerd[1514]: time="2025-01-13T21:05:21.359334542Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:05:21.359509 containerd[1514]: time="2025-01-13T21:05:21.359354318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:05:21.359509 containerd[1514]: time="2025-01-13T21:05:21.359401688Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:05:21.359509 containerd[1514]: time="2025-01-13T21:05:21.359429148Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:05:21.359509 containerd[1514]: time="2025-01-13T21:05:21.359450083Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:05:21.359509 containerd[1514]: time="2025-01-13T21:05:21.359485286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359507574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359526253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359546702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359571909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359598222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359618193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359637683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359656966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.359712 containerd[1514]: time="2025-01-13T21:05:21.359684634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359716383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359735467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359753942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359787154Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359826523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359850128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359879596Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:05:21.360061 containerd[1514]: time="2025-01-13T21:05:21.359978146Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:05:21.362643 containerd[1514]: time="2025-01-13T21:05:21.360005763Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:05:21.362643 containerd[1514]: time="2025-01-13T21:05:21.361287282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:05:21.362643 containerd[1514]: time="2025-01-13T21:05:21.361324303Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:05:21.362643 containerd[1514]: time="2025-01-13T21:05:21.361339089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:05:21.362643 containerd[1514]: time="2025-01-13T21:05:21.361388773Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:05:21.362643 containerd[1514]: time="2025-01-13T21:05:21.361434790Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:05:21.362643 containerd[1514]: time="2025-01-13T21:05:21.361452742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:05:21.362942 containerd[1514]: time="2025-01-13T21:05:21.361856357Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:05:21.362942 containerd[1514]: time="2025-01-13T21:05:21.361940673Z" level=info msg="Connect containerd service" Jan 13 21:05:21.362942 containerd[1514]: time="2025-01-13T21:05:21.361984935Z" level=info msg="using legacy CRI server" Jan 13 21:05:21.362942 containerd[1514]: time="2025-01-13T21:05:21.362018279Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:05:21.362942 containerd[1514]: time="2025-01-13T21:05:21.362228584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:05:21.366335 containerd[1514]: time="2025-01-13T21:05:21.365643885Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:05:21.366512 
containerd[1514]: time="2025-01-13T21:05:21.366381645Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:05:21.366512 containerd[1514]: time="2025-01-13T21:05:21.366478949Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:05:21.369302 containerd[1514]: time="2025-01-13T21:05:21.369238388Z" level=info msg="Start subscribing containerd event" Jan 13 21:05:21.369526 containerd[1514]: time="2025-01-13T21:05:21.369312387Z" level=info msg="Start recovering state" Jan 13 21:05:21.369526 containerd[1514]: time="2025-01-13T21:05:21.369439289Z" level=info msg="Start event monitor" Jan 13 21:05:21.369526 containerd[1514]: time="2025-01-13T21:05:21.369474790Z" level=info msg="Start snapshots syncer" Jan 13 21:05:21.369526 containerd[1514]: time="2025-01-13T21:05:21.369498028Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:05:21.369526 containerd[1514]: time="2025-01-13T21:05:21.369512964Z" level=info msg="Start streaming server" Jan 13 21:05:21.371897 containerd[1514]: time="2025-01-13T21:05:21.369618198Z" level=info msg="containerd successfully booted in 0.109662s" Jan 13 21:05:21.369775 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:05:21.659851 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:05:21.692950 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:05:21.704701 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:05:21.714764 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:05:21.715077 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:05:21.727051 tar[1506]: linux-amd64/LICENSE Jan 13 21:05:21.727051 tar[1506]: linux-amd64/README.md Jan 13 21:05:21.728633 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:05:21.751252 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:05:21.764778 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:05:21.767780 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:05:21.768868 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:05:21.770634 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:05:21.950928 systemd-networkd[1428]: eth0: Gained IPv6LL Jan 13 21:05:21.955365 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:05:21.958095 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:05:21.972660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:05:21.977019 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:05:22.019812 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:05:22.862928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:05:22.881122 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:05:23.067752 systemd-networkd[1428]: eth0: Ignoring DHCPv6 address 2a02:1348:179:890d:24:19ff:fee6:2436/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:890d:24:19ff:fee6:2436/64 assigned by NDisc. 
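The huge "Start cri plugin with config {...}" entry a few entries up is containerd echoing its effective CRI configuration; the operative details are the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and pause:3.8 as the sandbox image. The "failed to load cni during init" error is expected at this point, since nothing has written a network config to /etc/cni/net.d yet. In /etc/containerd/config.toml terms, the runc cgroup setting corresponds roughly to this fragment (a sketch; the full Flatcar config carries much more):

    # Hedged config.toml fragment behind the "SystemdCgroup:true" runc option
    # visible in the dump above (containerd 1.7, CRI v1 plugin).
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
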
Jan 13 21:05:23.067764 systemd-networkd[1428]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 13 21:05:23.543527 kubelet[1617]: E0113 21:05:23.543215 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:05:23.546423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:05:23.546711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:05:23.547394 systemd[1]: kubelet.service: Consumed 1.054s CPU time. Jan 13 21:05:25.713717 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:05:25.729691 systemd[1]: Started sshd@0-10.230.36.54:22-139.178.68.195:37434.service - OpenSSH per-connection server daemon (139.178.68.195:37434). Jan 13 21:05:26.637593 sshd[1629]: Accepted publickey for core from 139.178.68.195 port 37434 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:05:26.640720 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:05:26.658130 systemd-logind[1501]: New session 1 of user core. Jan 13 21:05:26.660963 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:05:26.670750 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:05:26.704496 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:05:26.713662 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:05:26.728941 (systemd)[1633]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:05:26.827519 agetty[1598]: failed to open credentials directory Jan 13 21:05:26.842334 agetty[1597]: failed to open credentials directory Jan 13 21:05:26.852871 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:05:26.862338 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:05:26.868363 systemd-logind[1501]: New session 2 of user core. Jan 13 21:05:26.878308 systemd-logind[1501]: New session 3 of user core. Jan 13 21:05:26.881358 systemd[1633]: Queued start job for default target default.target. Jan 13 21:05:26.889950 systemd[1633]: Created slice app.slice - User Application Slice. Jan 13 21:05:26.889999 systemd[1633]: Reached target paths.target - Paths. Jan 13 21:05:26.890023 systemd[1633]: Reached target timers.target - Timers. Jan 13 21:05:26.892117 systemd[1633]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:05:26.908301 systemd[1633]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:05:26.909319 systemd[1633]: Reached target sockets.target - Sockets. Jan 13 21:05:26.909354 systemd[1633]: Reached target basic.target - Basic System. Jan 13 21:05:26.909426 systemd[1633]: Reached target default.target - Main User Target. Jan 13 21:05:26.909490 systemd[1633]: Startup finished in 171ms. Jan 13 21:05:26.910056 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:05:26.923728 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:05:26.925473 systemd[1]: Started session-2.scope - Session 2 of User core. 
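The kubelet exit is expected at this stage: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, and this node has not been joined yet, so the unit fails and will be retried until kubeadm runs. For reference, the missing file is a KubeletConfiguration object; a minimal sketch of its shape (kubeadm generates the real one, which is considerably larger):

    # Hedged sketch of the file the error complains about; on kubeadm-managed
    # nodes it is generated at join time, not written by hand.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
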
Jan 13 21:05:26.927122 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:05:27.565713 systemd[1]: Started sshd@1-10.230.36.54:22-139.178.68.195:37450.service - OpenSSH per-connection server daemon (139.178.68.195:37450). Jan 13 21:05:27.773310 coreos-metadata[1491]: Jan 13 21:05:27.773 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:05:27.799558 coreos-metadata[1491]: Jan 13 21:05:27.799 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 21:05:27.805631 coreos-metadata[1491]: Jan 13 21:05:27.805 INFO Fetch failed with 404: resource not found Jan 13 21:05:27.805631 coreos-metadata[1491]: Jan 13 21:05:27.805 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:05:27.806373 coreos-metadata[1491]: Jan 13 21:05:27.806 INFO Fetch successful Jan 13 21:05:27.806548 coreos-metadata[1491]: Jan 13 21:05:27.806 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 21:05:27.820487 coreos-metadata[1491]: Jan 13 21:05:27.820 INFO Fetch successful Jan 13 21:05:27.820487 coreos-metadata[1491]: Jan 13 21:05:27.820 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 21:05:27.834421 coreos-metadata[1491]: Jan 13 21:05:27.834 INFO Fetch successful Jan 13 21:05:27.834421 coreos-metadata[1491]: Jan 13 21:05:27.834 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 21:05:27.850268 coreos-metadata[1491]: Jan 13 21:05:27.850 INFO Fetch successful Jan 13 21:05:27.850268 coreos-metadata[1491]: Jan 13 21:05:27.850 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 21:05:27.867704 coreos-metadata[1491]: Jan 13 21:05:27.867 INFO Fetch successful Jan 13 21:05:27.891270 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:05:27.892140 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:05:28.336248 coreos-metadata[1562]: Jan 13 21:05:28.336 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:05:28.359028 coreos-metadata[1562]: Jan 13 21:05:28.358 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 21:05:28.387585 coreos-metadata[1562]: Jan 13 21:05:28.387 INFO Fetch successful Jan 13 21:05:28.387843 coreos-metadata[1562]: Jan 13 21:05:28.387 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:05:28.424550 coreos-metadata[1562]: Jan 13 21:05:28.424 INFO Fetch successful Jan 13 21:05:28.426949 unknown[1562]: wrote ssh authorized keys file for user: core Jan 13 21:05:28.445668 update-ssh-keys[1681]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:05:28.447357 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:05:28.450009 systemd[1]: Finished sshkeys.service. Jan 13 21:05:28.451998 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:05:28.452420 systemd[1]: Startup finished in 1.317s (kernel) + 16.605s (initrd) + 11.515s (userspace) = 29.438s. 
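coreos-metadata finds no config-drive and falls back to the link-local metadata service; the 404 on the OpenStack-specific path is tolerated and the EC2-compatible endpoints succeed. The fallback amounts to fetches like these (same endpoints as in the log; runs only from inside the instance):

    # Sketch of the metadata fallback shown above: fetch instance attributes
    # from the EC2-compatible link-local endpoint.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data"
    for key in ("hostname", "instance-id", "instance-type",
                "local-ipv4", "public-ipv4"):
        with urlopen(f"{BASE}/{key}", timeout=5) as resp:
            print(key, "=", resp.read().decode().strip())
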
Jan 13 21:05:28.458453 sshd[1670]: Accepted publickey for core from 139.178.68.195 port 37450 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:05:28.460533 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:05:28.468243 systemd-logind[1501]: New session 4 of user core. Jan 13 21:05:28.476389 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:05:29.134518 sshd[1685]: Connection closed by 139.178.68.195 port 37450 Jan 13 21:05:29.135457 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Jan 13 21:05:29.140517 systemd[1]: sshd@1-10.230.36.54:22-139.178.68.195:37450.service: Deactivated successfully. Jan 13 21:05:29.142600 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:05:29.143621 systemd-logind[1501]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:05:29.145281 systemd-logind[1501]: Removed session 4. Jan 13 21:05:29.300501 systemd[1]: Started sshd@2-10.230.36.54:22-139.178.68.195:37464.service - OpenSSH per-connection server daemon (139.178.68.195:37464). Jan 13 21:05:30.219236 sshd[1690]: Accepted publickey for core from 139.178.68.195 port 37464 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:05:30.221020 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:05:30.228430 systemd-logind[1501]: New session 5 of user core. Jan 13 21:05:30.239377 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:05:30.833644 sshd[1692]: Connection closed by 139.178.68.195 port 37464 Jan 13 21:05:30.834906 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 13 21:05:30.840565 systemd[1]: sshd@2-10.230.36.54:22-139.178.68.195:37464.service: Deactivated successfully. Jan 13 21:05:30.842811 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:05:30.843711 systemd-logind[1501]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:05:30.845601 systemd-logind[1501]: Removed session 5. Jan 13 21:05:30.994540 systemd[1]: Started sshd@3-10.230.36.54:22-139.178.68.195:37472.service - OpenSSH per-connection server daemon (139.178.68.195:37472). Jan 13 21:05:31.899522 sshd[1697]: Accepted publickey for core from 139.178.68.195 port 37472 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:05:31.901528 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:05:31.910207 systemd-logind[1501]: New session 6 of user core. Jan 13 21:05:31.916395 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:05:32.522230 sshd[1699]: Connection closed by 139.178.68.195 port 37472 Jan 13 21:05:32.523451 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Jan 13 21:05:32.528526 systemd[1]: sshd@3-10.230.36.54:22-139.178.68.195:37472.service: Deactivated successfully. Jan 13 21:05:32.530913 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:05:32.531890 systemd-logind[1501]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:05:32.533810 systemd-logind[1501]: Removed session 6. Jan 13 21:05:32.681752 systemd[1]: Started sshd@4-10.230.36.54:22-139.178.68.195:37480.service - OpenSSH per-connection server daemon (139.178.68.195:37480). Jan 13 21:05:33.560136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
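"Scheduled restart job, restart counter is at 1" is systemd re-queuing the failed kubelet; the counter reaches 2 roughly ten seconds later (21:05:44), a cadence consistent with the usual kubeadm-style unit settings, sketched here (the unit actually shipped on the node may differ):

    # Hedged sketch of the [Service] settings implied by the ~10 s restart
    # cadence between kubelet attempts.
    [Service]
    Restart=always
    RestartSec=10
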
Jan 13 21:05:33.568402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:05:33.597086 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 37480 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:05:33.599166 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:05:33.606711 systemd-logind[1501]: New session 7 of user core. Jan 13 21:05:33.612563 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:05:33.752800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:05:33.758855 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:05:33.833993 kubelet[1715]: E0113 21:05:33.833732 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:05:33.838039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:05:33.838334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:05:34.097198 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:05:34.097717 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:05:34.114884 sudo[1723]: pam_unix(sudo:session): session closed for user root Jan 13 21:05:34.262184 sshd[1709]: Connection closed by 139.178.68.195 port 37480 Jan 13 21:05:34.261583 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Jan 13 21:05:34.266647 systemd[1]: sshd@4-10.230.36.54:22-139.178.68.195:37480.service: Deactivated successfully. Jan 13 21:05:34.269726 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:05:34.271612 systemd-logind[1501]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:05:34.273828 systemd-logind[1501]: Removed session 7. Jan 13 21:05:34.419551 systemd[1]: Started sshd@5-10.230.36.54:22-139.178.68.195:37496.service - OpenSSH per-connection server daemon (139.178.68.195:37496). Jan 13 21:05:35.322207 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 37496 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:05:35.324373 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:05:35.332540 systemd-logind[1501]: New session 8 of user core. Jan 13 21:05:35.342408 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:05:35.799217 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:05:35.799780 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:05:35.806044 sudo[1732]: pam_unix(sudo:session): session closed for user root Jan 13 21:05:35.815742 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 21:05:35.816284 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:05:35.846136 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
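The sudo entries show the provisioning user flipping SELinux to enforcing (setenforce 1) and then pruning the default audit rules before restarting audit-rules.service, which is why the next load reports "No rules". The current enforcement mode can be read back through selinuxfs:

    # Read the current SELinux mode via selinuxfs (what `setenforce 1` toggled).
    with open("/sys/fs/selinux/enforce") as fh:
        print("enforcing" if fh.read().strip() == "1" else "permissive")
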
Jan 13 21:05:35.891428 augenrules[1754]: No rules Jan 13 21:05:35.892425 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:05:35.892751 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 21:05:35.894359 sudo[1731]: pam_unix(sudo:session): session closed for user root Jan 13 21:05:36.037414 sshd[1730]: Connection closed by 139.178.68.195 port 37496 Jan 13 21:05:36.038509 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jan 13 21:05:36.043659 systemd[1]: sshd@5-10.230.36.54:22-139.178.68.195:37496.service: Deactivated successfully. Jan 13 21:05:36.046325 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:05:36.048829 systemd-logind[1501]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:05:36.050489 systemd-logind[1501]: Removed session 8. Jan 13 21:05:36.203822 systemd[1]: Started sshd@6-10.230.36.54:22-139.178.68.195:36256.service - OpenSSH per-connection server daemon (139.178.68.195:36256). Jan 13 21:05:37.092227 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 36256 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:05:37.094384 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:05:37.102335 systemd-logind[1501]: New session 9 of user core. Jan 13 21:05:37.109417 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:05:37.570649 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:05:37.571744 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:05:38.036795 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:05:38.037608 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:05:38.451613 dockerd[1783]: time="2025-01-13T21:05:38.451456161Z" level=info msg="Starting up" Jan 13 21:05:38.563897 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2639702485-merged.mount: Deactivated successfully. Jan 13 21:05:38.600004 dockerd[1783]: time="2025-01-13T21:05:38.599886462Z" level=info msg="Loading containers: start." Jan 13 21:05:38.839284 kernel: Initializing XFRM netlink socket Jan 13 21:05:38.949324 systemd-networkd[1428]: docker0: Link UP Jan 13 21:05:38.991046 dockerd[1783]: time="2025-01-13T21:05:38.990963332Z" level=info msg="Loading containers: done." Jan 13 21:05:39.015846 dockerd[1783]: time="2025-01-13T21:05:39.015784796Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:05:39.016084 dockerd[1783]: time="2025-01-13T21:05:39.015979026Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 21:05:39.016184 dockerd[1783]: time="2025-01-13T21:05:39.016137840Z" level=info msg="Daemon has completed initialization" Jan 13 21:05:39.063244 dockerd[1783]: time="2025-01-13T21:05:39.061979414Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:05:39.062412 systemd[1]: Started docker.service - Docker Application Container Engine. 
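Docker comes up cleanly: the XFRM netlink socket and docker0 bridge are initialized, and the only caveat is the overlay2 warning, which just means the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR forces the slower naive diff path when committing layers. Since docker.socket was redirected to /run/docker.sock earlier in the log, the daemon can be probed there directly; a sketch:

    # Sketch: talk to the freshly started daemon over /run/docker.sock (the
    # path the socket unit was rewritten to earlier) and print its version.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(conn.getresponse().read().decode())
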
Jan 13 21:05:40.473232 containerd[1514]: time="2025-01-13T21:05:40.472970308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:05:41.266187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982246398.mount: Deactivated successfully. Jan 13 21:05:43.904931 containerd[1514]: time="2025-01-13T21:05:43.901980257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:43.904931 containerd[1514]: time="2025-01-13T21:05:43.905176635Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Jan 13 21:05:43.904931 containerd[1514]: time="2025-01-13T21:05:43.906436796Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:43.911053 containerd[1514]: time="2025-01-13T21:05:43.910946729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:43.912686 containerd[1514]: time="2025-01-13T21:05:43.912649251Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.439562266s" Jan 13 21:05:43.913178 containerd[1514]: time="2025-01-13T21:05:43.912873682Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 21:05:43.953973 containerd[1514]: time="2025-01-13T21:05:43.953913407Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:05:44.061692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:05:44.070470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:05:44.252363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:05:44.263673 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:05:44.334770 kubelet[2046]: E0113 21:05:44.334660 2046 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:05:44.337016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:05:44.337290 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:05:46.969198 containerd[1514]: time="2025-01-13T21:05:46.969094758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:46.976042 containerd[1514]: time="2025-01-13T21:05:46.975959940Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Jan 13 21:05:46.977301 containerd[1514]: time="2025-01-13T21:05:46.977223690Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:46.982366 containerd[1514]: time="2025-01-13T21:05:46.982293549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:46.984000 containerd[1514]: time="2025-01-13T21:05:46.983828631Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 3.029572236s" Jan 13 21:05:46.984000 containerd[1514]: time="2025-01-13T21:05:46.983870514Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 21:05:47.015582 containerd[1514]: time="2025-01-13T21:05:47.015531055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:05:48.838376 containerd[1514]: time="2025-01-13T21:05:48.835971285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:48.838376 containerd[1514]: time="2025-01-13T21:05:48.837642019Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Jan 13 21:05:48.838376 containerd[1514]: time="2025-01-13T21:05:48.839982479Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:48.847931 containerd[1514]: time="2025-01-13T21:05:48.844749836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:48.847931 containerd[1514]: time="2025-01-13T21:05:48.847338857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.831751902s" Jan 13 21:05:48.847931 containerd[1514]: time="2025-01-13T21:05:48.847414720Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 21:05:48.884816 
containerd[1514]: time="2025-01-13T21:05:48.884245929Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:05:50.369561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092742142.mount: Deactivated successfully. Jan 13 21:05:51.032573 containerd[1514]: time="2025-01-13T21:05:51.032502872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:51.034181 containerd[1514]: time="2025-01-13T21:05:51.033768021Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 13 21:05:51.035393 containerd[1514]: time="2025-01-13T21:05:51.035302382Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:51.043182 containerd[1514]: time="2025-01-13T21:05:51.043038932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:51.044673 containerd[1514]: time="2025-01-13T21:05:51.044445264Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.160137169s" Jan 13 21:05:51.044673 containerd[1514]: time="2025-01-13T21:05:51.044510014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:05:51.086961 containerd[1514]: time="2025-01-13T21:05:51.086891299Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:05:51.718376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016288441.mount: Deactivated successfully. 
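The pull records so far carry enough data for a quick throughput check: each entry reports a "bytes read" figure and a pull duration. A worked calculation over the four control-plane images pulled above:

```python
# Worked arithmetic from the pull log above: effective registry throughput
# per image, using each "bytes read" figure and the reported pull duration.
pulls = {
    "kube-apiserver:v1.30.8":          (32_675_650, 3.439562266),
    "kube-controller-manager:v1.30.8": (29_606_417, 3.029572236),
    "kube-scheduler:v1.30.8":          (17_783_043, 1.831751902),
    "kube-proxy:v1.30.8":              (29_057_478, 2.160137169),
}
for image, (nbytes, seconds) in pulls.items():
    print(f"{image}: {nbytes / seconds / 1e6:.1f} MB/s")
# Roughly 9.5-13.5 MB/s from registry.k8s.io during this boot.
```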
Jan 13 21:05:52.919531 containerd[1514]: time="2025-01-13T21:05:52.917988183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:52.921219 containerd[1514]: time="2025-01-13T21:05:52.921173603Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 21:05:52.922763 containerd[1514]: time="2025-01-13T21:05:52.922726368Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:52.926316 containerd[1514]: time="2025-01-13T21:05:52.926266141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:52.928090 containerd[1514]: time="2025-01-13T21:05:52.928054342Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.841098519s" Jan 13 21:05:52.928249 containerd[1514]: time="2025-01-13T21:05:52.928222617Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:05:52.961544 containerd[1514]: time="2025-01-13T21:05:52.961474045Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:05:53.112238 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 21:05:53.582742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591201996.mount: Deactivated successfully. 
Jan 13 21:05:53.589728 containerd[1514]: time="2025-01-13T21:05:53.589663684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:53.591508 containerd[1514]: time="2025-01-13T21:05:53.591370530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 13 21:05:53.595404 containerd[1514]: time="2025-01-13T21:05:53.595353659Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:53.599342 containerd[1514]: time="2025-01-13T21:05:53.598374236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:53.600673 containerd[1514]: time="2025-01-13T21:05:53.599666426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 638.121529ms" Jan 13 21:05:53.600673 containerd[1514]: time="2025-01-13T21:05:53.599713305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:05:53.633560 containerd[1514]: time="2025-01-13T21:05:53.633111876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:05:54.237828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421069520.mount: Deactivated successfully. Jan 13 21:05:54.561917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:05:54.573478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:05:54.881402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:05:54.886346 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:05:54.988920 kubelet[2164]: E0113 21:05:54.988822 2164 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:05:54.992174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:05:54.992443 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
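This is the same missing-config failure as before, now at restart counter 3. The counter systemd reports is queryable as a unit property; a minimal sketch, assuming a systemd new enough to expose NRestarts (v235+):

```python
# Minimal sketch: read the restart counter systemd reports above
# ("restart counter is at 3") from the unit's NRestarts property.
# Assumes systemd >= 235, where NRestarts exists.
import subprocess

out = subprocess.run(
    ["systemctl", "show", "kubelet.service", "--property=NRestarts"],
    capture_output=True, text=True,
)
print(out.stdout.strip())  # e.g. "NRestarts=3"
```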
Jan 13 21:05:58.052817 containerd[1514]: time="2025-01-13T21:05:58.052587529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:58.055417 containerd[1514]: time="2025-01-13T21:05:58.055323008Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 13 21:05:58.056329 containerd[1514]: time="2025-01-13T21:05:58.056284143Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:58.062489 containerd[1514]: time="2025-01-13T21:05:58.061734799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:05:58.063558 containerd[1514]: time="2025-01-13T21:05:58.063514438Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.430282576s" Jan 13 21:05:58.063644 containerd[1514]: time="2025-01-13T21:05:58.063591910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 21:06:02.346820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:06:02.365137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:06:02.394357 systemd[1]: Reloading requested from client PID 2268 ('systemctl') (unit session-9.scope)... Jan 13 21:06:02.394404 systemd[1]: Reloading... Jan 13 21:06:02.582233 zram_generator::config[2303]: No configuration found. Jan 13 21:06:02.773829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:06:02.889424 systemd[1]: Reloading finished in 494 ms. Jan 13 21:06:02.973531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:06:02.978789 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:06:02.982038 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:06:02.982432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:06:02.988572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:06:03.169313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:06:03.188988 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:06:03.286837 kubelet[2376]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:06:03.286837 kubelet[2376]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
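The deprecation warnings here and just below (--container-runtime-endpoint, --volume-plugin-dir) point at KubeletConfiguration equivalents; --pod-infra-container-image has none, since the image garbage collector now gets the sandbox image from the CRI. A minimal sketch of the equivalent config stanza, assuming field names from kubelet.config.k8s.io/v1beta1 and a containerd socket path not shown in this log:

```python
# Minimal sketch: the config-file form of the two deprecated flags that have
# equivalents. Field names assume kubelet.config.k8s.io/v1beta1; the endpoint
# path is an assumption (the actual flag values are not shown in this log).
import yaml  # PyYAML

cfg = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(yaml.safe_dump(cfg, sort_keys=False))
```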
Jan 13 21:06:03.286837 kubelet[2376]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:06:03.298250 kubelet[2376]: I0113 21:06:03.297707 2376 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:06:03.639358 kubelet[2376]: I0113 21:06:03.639272 2376 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:06:03.640007 kubelet[2376]: I0113 21:06:03.639777 2376 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:06:03.640438 kubelet[2376]: I0113 21:06:03.640365 2376 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:06:03.665837 kubelet[2376]: I0113 21:06:03.664979 2376 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:06:03.666053 kubelet[2376]: E0113 21:06:03.666026 2376 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.36.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.699743 kubelet[2376]: I0113 21:06:03.699699 2376 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:06:03.703711 kubelet[2376]: I0113 21:06:03.702940 2376 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:06:03.703711 kubelet[2376]: I0113 21:06:03.703017 2376 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-85agx.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:06:03.703711 kubelet[2376]: I0113 21:06:03.703372 2376 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 13 21:06:03.703711 kubelet[2376]: I0113 21:06:03.703389 2376 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:06:03.704867 kubelet[2376]: I0113 21:06:03.704805 2376 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:06:03.706014 kubelet[2376]: I0113 21:06:03.705976 2376 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:06:03.706079 kubelet[2376]: I0113 21:06:03.706015 2376 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:06:03.706079 kubelet[2376]: I0113 21:06:03.706076 2376 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:06:03.706235 kubelet[2376]: I0113 21:06:03.706118 2376 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:06:03.711169 kubelet[2376]: W0113 21:06:03.710501 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.36.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-85agx.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.711169 kubelet[2376]: E0113 21:06:03.710623 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.36.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-85agx.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.711169 kubelet[2376]: W0113 21:06:03.711005 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.36.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.711169 kubelet[2376]: E0113 21:06:03.711062 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.36.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.711666 kubelet[2376]: I0113 21:06:03.711638 2376 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 21:06:03.713230 kubelet[2376]: I0113 21:06:03.713200 2376 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:06:03.713374 kubelet[2376]: W0113 21:06:03.713344 2376 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:06:03.714831 kubelet[2376]: I0113 21:06:03.714689 2376 server.go:1264] "Started kubelet" Jan 13 21:06:03.720069 kubelet[2376]: I0113 21:06:03.720016 2376 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:06:03.721760 kubelet[2376]: I0113 21:06:03.721021 2376 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:06:03.721760 kubelet[2376]: I0113 21:06:03.721691 2376 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:06:03.722634 kubelet[2376]: E0113 21:06:03.721914 2376 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.36.54:6443/api/v1/namespaces/default/events\": dial tcp 10.230.36.54:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-85agx.gb1.brightbox.com.181a5c8c3e45503b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-85agx.gb1.brightbox.com,UID:srv-85agx.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-85agx.gb1.brightbox.com,},FirstTimestamp:2025-01-13 21:06:03.714654267 +0000 UTC m=+0.520291482,LastTimestamp:2025-01-13 21:06:03.714654267 +0000 UTC m=+0.520291482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-85agx.gb1.brightbox.com,}" Jan 13 21:06:03.725020 kubelet[2376]: I0113 21:06:03.724877 2376 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:06:03.728717 kubelet[2376]: I0113 21:06:03.728494 2376 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:06:03.736647 kubelet[2376]: E0113 21:06:03.736601 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-85agx.gb1.brightbox.com\" not found" Jan 13 21:06:03.736739 kubelet[2376]: I0113 21:06:03.736725 2376 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:06:03.738171 kubelet[2376]: I0113 21:06:03.736899 2376 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:06:03.738171 kubelet[2376]: I0113 21:06:03.737010 2376 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:06:03.738171 kubelet[2376]: W0113 21:06:03.737451 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.36.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.738171 kubelet[2376]: E0113 21:06:03.737504 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.36.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.738591 kubelet[2376]: E0113 21:06:03.738550 2376 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:06:03.739141 kubelet[2376]: E0113 21:06:03.739100 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-85agx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.54:6443: connect: connection refused" interval="200ms" Jan 13 21:06:03.744961 kubelet[2376]: I0113 21:06:03.744928 2376 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:06:03.744961 kubelet[2376]: I0113 21:06:03.744955 2376 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:06:03.745468 kubelet[2376]: I0113 21:06:03.745435 2376 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:06:03.771492 kubelet[2376]: I0113 21:06:03.771387 2376 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:06:03.774262 kubelet[2376]: I0113 21:06:03.774210 2376 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:06:03.774358 kubelet[2376]: I0113 21:06:03.774283 2376 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:06:03.774358 kubelet[2376]: I0113 21:06:03.774328 2376 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:06:03.774475 kubelet[2376]: E0113 21:06:03.774400 2376 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:06:03.778270 kubelet[2376]: W0113 21:06:03.778202 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.36.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.778368 kubelet[2376]: E0113 21:06:03.778281 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.36.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:03.788739 kubelet[2376]: I0113 21:06:03.788663 2376 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:06:03.788739 kubelet[2376]: I0113 21:06:03.788729 2376 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:06:03.788938 kubelet[2376]: I0113 21:06:03.788761 2376 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:06:03.790918 kubelet[2376]: I0113 21:06:03.790878 2376 policy_none.go:49] "None policy: Start" Jan 13 21:06:03.791655 kubelet[2376]: I0113 21:06:03.791625 2376 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:06:03.791765 kubelet[2376]: I0113 21:06:03.791663 2376 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:06:03.803936 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:06:03.816226 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:06:03.822091 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
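With the None CPU and memory policies in place, systemd has now created the cgroup slices that back the pod QoS classes (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice). A minimal sketch that enumerates them, assuming a unified cgroup v2 hierarchy at /sys/fs/cgroup as Flatcar uses:

```python
# Minimal sketch: list the QoS slices systemd just created. Assumes a
# unified cgroup v2 hierarchy mounted at /sys/fs/cgroup.
from pathlib import Path

root = Path("/sys/fs/cgroup")
for slice_dir in sorted(root.rglob("kubepods*.slice")):
    print(slice_dir.relative_to(root))
```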
Jan 13 21:06:03.834422 kubelet[2376]: I0113 21:06:03.833681 2376 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:06:03.834422 kubelet[2376]: I0113 21:06:03.834022 2376 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:06:03.834422 kubelet[2376]: I0113 21:06:03.834241 2376 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:06:03.836063 kubelet[2376]: E0113 21:06:03.836036 2376 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-85agx.gb1.brightbox.com\" not found" Jan 13 21:06:03.841398 kubelet[2376]: I0113 21:06:03.841293 2376 kubelet_node_status.go:73] "Attempting to register node" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:03.841978 kubelet[2376]: E0113 21:06:03.841945 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.36.54:6443/api/v1/nodes\": dial tcp 10.230.36.54:6443: connect: connection refused" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:03.875899 kubelet[2376]: I0113 21:06:03.875492 2376 topology_manager.go:215] "Topology Admit Handler" podUID="6c35af4e7d242c56818f4c23d9a4ebca" podNamespace="kube-system" podName="kube-scheduler-srv-85agx.gb1.brightbox.com" Jan 13 21:06:03.878500 kubelet[2376]: I0113 21:06:03.878464 2376 topology_manager.go:215] "Topology Admit Handler" podUID="5663f8283d5cb4c79d584cfce5035efd" podNamespace="kube-system" podName="kube-apiserver-srv-85agx.gb1.brightbox.com" Jan 13 21:06:03.881213 kubelet[2376]: I0113 21:06:03.880987 2376 topology_manager.go:215] "Topology Admit Handler" podUID="349323a42eb0ac908ead0a0270aee4f0" podNamespace="kube-system" podName="kube-controller-manager-srv-85agx.gb1.brightbox.com" Jan 13 21:06:03.893868 systemd[1]: Created slice kubepods-burstable-pod6c35af4e7d242c56818f4c23d9a4ebca.slice - libcontainer container kubepods-burstable-pod6c35af4e7d242c56818f4c23d9a4ebca.slice. Jan 13 21:06:03.918784 systemd[1]: Created slice kubepods-burstable-pod5663f8283d5cb4c79d584cfce5035efd.slice - libcontainer container kubepods-burstable-pod5663f8283d5cb4c79d584cfce5035efd.slice. Jan 13 21:06:03.932737 systemd[1]: Created slice kubepods-burstable-pod349323a42eb0ac908ead0a0270aee4f0.slice - libcontainer container kubepods-burstable-pod349323a42eb0ac908ead0a0270aee4f0.slice. 
Jan 13 21:06:03.940345 kubelet[2376]: E0113 21:06:03.940268 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-85agx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.54:6443: connect: connection refused" interval="400ms" Jan 13 21:06:04.037933 kubelet[2376]: I0113 21:06:04.037772 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5663f8283d5cb4c79d584cfce5035efd-ca-certs\") pod \"kube-apiserver-srv-85agx.gb1.brightbox.com\" (UID: \"5663f8283d5cb4c79d584cfce5035efd\") " pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.037933 kubelet[2376]: I0113 21:06:04.037864 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-flexvolume-dir\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.037933 kubelet[2376]: I0113 21:06:04.037901 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-kubeconfig\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.037933 kubelet[2376]: I0113 21:06:04.037943 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.038422 kubelet[2376]: I0113 21:06:04.037974 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c35af4e7d242c56818f4c23d9a4ebca-kubeconfig\") pod \"kube-scheduler-srv-85agx.gb1.brightbox.com\" (UID: \"6c35af4e7d242c56818f4c23d9a4ebca\") " pod="kube-system/kube-scheduler-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.038422 kubelet[2376]: I0113 21:06:04.038002 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5663f8283d5cb4c79d584cfce5035efd-k8s-certs\") pod \"kube-apiserver-srv-85agx.gb1.brightbox.com\" (UID: \"5663f8283d5cb4c79d584cfce5035efd\") " pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.038422 kubelet[2376]: I0113 21:06:04.038032 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5663f8283d5cb4c79d584cfce5035efd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-85agx.gb1.brightbox.com\" (UID: \"5663f8283d5cb4c79d584cfce5035efd\") " pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.038422 kubelet[2376]: I0113 21:06:04.038058 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-ca-certs\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.038422 kubelet[2376]: I0113 21:06:04.038084 2376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-k8s-certs\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.045800 kubelet[2376]: I0113 21:06:04.045753 2376 kubelet_node_status.go:73] "Attempting to register node" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.046315 kubelet[2376]: E0113 21:06:04.046236 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.36.54:6443/api/v1/nodes\": dial tcp 10.230.36.54:6443: connect: connection refused" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.214422 containerd[1514]: time="2025-01-13T21:06:04.214208449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-85agx.gb1.brightbox.com,Uid:6c35af4e7d242c56818f4c23d9a4ebca,Namespace:kube-system,Attempt:0,}" Jan 13 21:06:04.228128 containerd[1514]: time="2025-01-13T21:06:04.228073529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-85agx.gb1.brightbox.com,Uid:5663f8283d5cb4c79d584cfce5035efd,Namespace:kube-system,Attempt:0,}" Jan 13 21:06:04.237608 containerd[1514]: time="2025-01-13T21:06:04.237069400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-85agx.gb1.brightbox.com,Uid:349323a42eb0ac908ead0a0270aee4f0,Namespace:kube-system,Attempt:0,}" Jan 13 21:06:04.341306 kubelet[2376]: E0113 21:06:04.341225 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-85agx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.54:6443: connect: connection refused" interval="800ms" Jan 13 21:06:04.451106 kubelet[2376]: I0113 21:06:04.450428 2376 kubelet_node_status.go:73] "Attempting to register node" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.451106 kubelet[2376]: E0113 21:06:04.451007 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.36.54:6443/api/v1/nodes\": dial tcp 10.230.36.54:6443: connect: connection refused" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:04.654955 kubelet[2376]: W0113 21:06:04.654784 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.36.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:04.654955 kubelet[2376]: E0113 21:06:04.654967 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.36.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:04.727728 kubelet[2376]: W0113 21:06:04.727597 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.230.36.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-85agx.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:04.727728 kubelet[2376]: E0113 21:06:04.727688 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.36.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-85agx.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:04.817684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3936693361.mount: Deactivated successfully. Jan 13 21:06:04.844222 containerd[1514]: time="2025-01-13T21:06:04.843542203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:06:04.846692 containerd[1514]: time="2025-01-13T21:06:04.846636493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:06:04.847379 containerd[1514]: time="2025-01-13T21:06:04.847342273Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:06:04.849132 containerd[1514]: time="2025-01-13T21:06:04.849088185Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:06:04.851524 containerd[1514]: time="2025-01-13T21:06:04.851482838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 21:06:04.853640 containerd[1514]: time="2025-01-13T21:06:04.853011023Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:06:04.853640 containerd[1514]: time="2025-01-13T21:06:04.853450216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:06:04.854879 containerd[1514]: time="2025-01-13T21:06:04.854838878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:06:04.860172 containerd[1514]: time="2025-01-13T21:06:04.858583670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.375667ms" Jan 13 21:06:04.862859 containerd[1514]: time="2025-01-13T21:06:04.862822895Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.597819ms" Jan 13 21:06:04.863234 containerd[1514]: 
time="2025-01-13T21:06:04.863018352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.562495ms" Jan 13 21:06:05.112879 containerd[1514]: time="2025-01-13T21:06:05.112451038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:06:05.113539 containerd[1514]: time="2025-01-13T21:06:05.112725670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:06:05.113539 containerd[1514]: time="2025-01-13T21:06:05.112854455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:06:05.114270 containerd[1514]: time="2025-01-13T21:06:05.114033297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:06:05.116824 containerd[1514]: time="2025-01-13T21:06:05.116733039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:06:05.117704 containerd[1514]: time="2025-01-13T21:06:05.117599956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:06:05.118280 containerd[1514]: time="2025-01-13T21:06:05.117636785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:06:05.118643 containerd[1514]: time="2025-01-13T21:06:05.118572644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:06:05.123484 containerd[1514]: time="2025-01-13T21:06:05.123375390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:06:05.123618 containerd[1514]: time="2025-01-13T21:06:05.123509431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:06:05.123618 containerd[1514]: time="2025-01-13T21:06:05.123569018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:06:05.125573 containerd[1514]: time="2025-01-13T21:06:05.125381436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:06:05.142564 kubelet[2376]: E0113 21:06:05.142474 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-85agx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.54:6443: connect: connection refused" interval="1.6s" Jan 13 21:06:05.157425 systemd[1]: Started cri-containerd-895c757df56d996466c300fd74760f905edec8b74842a9762040f53e9bf4fd4f.scope - libcontainer container 895c757df56d996466c300fd74760f905edec8b74842a9762040f53e9bf4fd4f. 
Jan 13 21:06:05.165217 systemd[1]: Started cri-containerd-2095c9ab32f4f615805ef06d8d93e3b72bf117dfd9d422e6696812b78e3690c8.scope - libcontainer container 2095c9ab32f4f615805ef06d8d93e3b72bf117dfd9d422e6696812b78e3690c8. Jan 13 21:06:05.189383 systemd[1]: Started cri-containerd-2ccaf632f80121c7c76d693ea9f038157c55337d9f7ab01fac264803e41464f1.scope - libcontainer container 2ccaf632f80121c7c76d693ea9f038157c55337d9f7ab01fac264803e41464f1. Jan 13 21:06:05.231976 kubelet[2376]: W0113 21:06:05.231374 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.36.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:05.231976 kubelet[2376]: E0113 21:06:05.231484 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.36.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:05.257566 kubelet[2376]: I0113 21:06:05.257082 2376 kubelet_node_status.go:73] "Attempting to register node" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:05.258032 kubelet[2376]: E0113 21:06:05.257950 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.36.54:6443/api/v1/nodes\": dial tcp 10.230.36.54:6443: connect: connection refused" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:05.288676 kubelet[2376]: W0113 21:06:05.288511 2376 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.36.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:05.288676 kubelet[2376]: E0113 21:06:05.288619 2376 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.36.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:05.308002 containerd[1514]: time="2025-01-13T21:06:05.307943019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-85agx.gb1.brightbox.com,Uid:349323a42eb0ac908ead0a0270aee4f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ccaf632f80121c7c76d693ea9f038157c55337d9f7ab01fac264803e41464f1\"" Jan 13 21:06:05.315316 containerd[1514]: time="2025-01-13T21:06:05.314867375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-85agx.gb1.brightbox.com,Uid:5663f8283d5cb4c79d584cfce5035efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"895c757df56d996466c300fd74760f905edec8b74842a9762040f53e9bf4fd4f\"" Jan 13 21:06:05.320893 containerd[1514]: time="2025-01-13T21:06:05.320837295Z" level=info msg="CreateContainer within sandbox \"2ccaf632f80121c7c76d693ea9f038157c55337d9f7ab01fac264803e41464f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:06:05.322967 containerd[1514]: time="2025-01-13T21:06:05.322934155Z" level=info msg="CreateContainer within sandbox \"895c757df56d996466c300fd74760f905edec8b74842a9762040f53e9bf4fd4f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:06:05.329132 containerd[1514]: time="2025-01-13T21:06:05.329058156Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-srv-85agx.gb1.brightbox.com,Uid:6c35af4e7d242c56818f4c23d9a4ebca,Namespace:kube-system,Attempt:0,} returns sandbox id \"2095c9ab32f4f615805ef06d8d93e3b72bf117dfd9d422e6696812b78e3690c8\"" Jan 13 21:06:05.335211 containerd[1514]: time="2025-01-13T21:06:05.334896233Z" level=info msg="CreateContainer within sandbox \"2095c9ab32f4f615805ef06d8d93e3b72bf117dfd9d422e6696812b78e3690c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:06:05.341904 containerd[1514]: time="2025-01-13T21:06:05.341706034Z" level=info msg="CreateContainer within sandbox \"2ccaf632f80121c7c76d693ea9f038157c55337d9f7ab01fac264803e41464f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ee7b79959d1d08070c4dbb6812bc57579a1c00ab26ab5ba08476ac618d702332\"" Jan 13 21:06:05.343560 containerd[1514]: time="2025-01-13T21:06:05.343339480Z" level=info msg="StartContainer for \"ee7b79959d1d08070c4dbb6812bc57579a1c00ab26ab5ba08476ac618d702332\"" Jan 13 21:06:05.347220 containerd[1514]: time="2025-01-13T21:06:05.346786222Z" level=info msg="CreateContainer within sandbox \"895c757df56d996466c300fd74760f905edec8b74842a9762040f53e9bf4fd4f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"94aa5da1a2db027d1e2604ccef86f580984c0cd2f43798338c7a0ea46d7f0653\"" Jan 13 21:06:05.349249 containerd[1514]: time="2025-01-13T21:06:05.349111562Z" level=info msg="StartContainer for \"94aa5da1a2db027d1e2604ccef86f580984c0cd2f43798338c7a0ea46d7f0653\"" Jan 13 21:06:05.374827 containerd[1514]: time="2025-01-13T21:06:05.373372988Z" level=info msg="CreateContainer within sandbox \"2095c9ab32f4f615805ef06d8d93e3b72bf117dfd9d422e6696812b78e3690c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e5727f6426a0db2e0c634cc0b514f3a4feb4f488eb0d76e90d72a23a36fb0f85\"" Jan 13 21:06:05.376104 containerd[1514]: time="2025-01-13T21:06:05.375039236Z" level=info msg="StartContainer for \"e5727f6426a0db2e0c634cc0b514f3a4feb4f488eb0d76e90d72a23a36fb0f85\"" Jan 13 21:06:05.409287 systemd[1]: Started cri-containerd-ee7b79959d1d08070c4dbb6812bc57579a1c00ab26ab5ba08476ac618d702332.scope - libcontainer container ee7b79959d1d08070c4dbb6812bc57579a1c00ab26ab5ba08476ac618d702332. Jan 13 21:06:05.422469 systemd[1]: Started cri-containerd-94aa5da1a2db027d1e2604ccef86f580984c0cd2f43798338c7a0ea46d7f0653.scope - libcontainer container 94aa5da1a2db027d1e2604ccef86f580984c0cd2f43798338c7a0ea46d7f0653. Jan 13 21:06:05.450412 systemd[1]: Started cri-containerd-e5727f6426a0db2e0c634cc0b514f3a4feb4f488eb0d76e90d72a23a36fb0f85.scope - libcontainer container e5727f6426a0db2e0c634cc0b514f3a4feb4f488eb0d76e90d72a23a36fb0f85. 
Jan 13 21:06:05.549996 containerd[1514]: time="2025-01-13T21:06:05.549598456Z" level=info msg="StartContainer for \"e5727f6426a0db2e0c634cc0b514f3a4feb4f488eb0d76e90d72a23a36fb0f85\" returns successfully" Jan 13 21:06:05.566353 containerd[1514]: time="2025-01-13T21:06:05.565908585Z" level=info msg="StartContainer for \"94aa5da1a2db027d1e2604ccef86f580984c0cd2f43798338c7a0ea46d7f0653\" returns successfully" Jan 13 21:06:05.570316 containerd[1514]: time="2025-01-13T21:06:05.569530748Z" level=info msg="StartContainer for \"ee7b79959d1d08070c4dbb6812bc57579a1c00ab26ab5ba08476ac618d702332\" returns successfully" Jan 13 21:06:05.723381 kubelet[2376]: E0113 21:06:05.722305 2376 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.36.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.36.54:6443: connect: connection refused Jan 13 21:06:06.017568 update_engine[1502]: I20250113 21:06:06.017272 1502 update_attempter.cc:509] Updating boot flags... Jan 13 21:06:06.109213 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2661) Jan 13 21:06:06.288342 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2661) Jan 13 21:06:06.862167 kubelet[2376]: I0113 21:06:06.862090 2376 kubelet_node_status.go:73] "Attempting to register node" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:08.688836 kubelet[2376]: E0113 21:06:08.688777 2376 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-85agx.gb1.brightbox.com\" not found" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:08.714549 kubelet[2376]: I0113 21:06:08.714477 2376 apiserver.go:52] "Watching apiserver" Jan 13 21:06:08.738070 kubelet[2376]: I0113 21:06:08.737864 2376 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:06:08.859123 kubelet[2376]: I0113 21:06:08.859065 2376 kubelet_node_status.go:76] "Successfully registered node" node="srv-85agx.gb1.brightbox.com" Jan 13 21:06:11.191789 systemd[1]: Reloading requested from client PID 2671 ('systemctl') (unit session-9.scope)... Jan 13 21:06:11.192517 systemd[1]: Reloading... Jan 13 21:06:11.373176 zram_generator::config[2714]: No configuration found. Jan 13 21:06:11.596784 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:06:11.746738 systemd[1]: Reloading finished in 553 ms. Jan 13 21:06:11.816912 kubelet[2376]: I0113 21:06:11.816795 2376 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:06:11.817030 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:06:11.827726 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:06:11.829071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:06:11.829352 systemd[1]: kubelet.service: Consumed 1.037s CPU time, 112.1M memory peak, 0B memory swap peak. Jan 13 21:06:11.839541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:06:12.085662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
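The certificate_manager errors above stop once the API server answers and the CSR is approved; after the restart below, the kubelet loads its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem. A minimal sketch for checking that credential's validity window, assuming the stock openssl CLI is available:

```python
# Minimal sketch: inspect the kubelet's bootstrapped client certificate
# (path taken from the log after the restart). Assumes the openssl CLI.
import subprocess

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"
result = subprocess.run(
    ["openssl", "x509", "-in", PEM, "-noout", "-subject", "-enddate"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```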
Jan 13 21:06:12.099655 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:06:12.230976 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:06:12.230976 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:06:12.230976 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:06:12.230976 kubelet[2774]: I0113 21:06:12.230742 2774 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:06:12.240547 kubelet[2774]: I0113 21:06:12.240328 2774 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 21:06:12.240547 kubelet[2774]: I0113 21:06:12.240368 2774 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:06:12.240973 kubelet[2774]: I0113 21:06:12.240884 2774 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 21:06:12.245816 kubelet[2774]: I0113 21:06:12.245475 2774 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 21:06:12.248288 kubelet[2774]: I0113 21:06:12.248265 2774 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:06:12.254089 sudo[2787]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 13 21:06:12.254718 sudo[2787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 13 21:06:12.267953 kubelet[2774]: I0113 21:06:12.267833 2774 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:06:12.268398 kubelet[2774]: I0113 21:06:12.268348 2774 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:06:12.269075 kubelet[2774]: I0113 21:06:12.268397 2774 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-85agx.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:06:12.269579 kubelet[2774]: I0113 21:06:12.269087 2774 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:06:12.269579 kubelet[2774]: I0113 21:06:12.269108 2774 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:06:12.269579 kubelet[2774]: I0113 21:06:12.269320 2774 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:06:12.269579 kubelet[2774]: I0113 21:06:12.269526 2774 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 21:06:12.269579 kubelet[2774]: I0113 21:06:12.269555 2774 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:06:12.270495 kubelet[2774]: I0113 21:06:12.269599 2774 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:06:12.270495 kubelet[2774]: I0113 21:06:12.269627 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:06:12.277183 kubelet[2774]: I0113 21:06:12.276576 2774 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 21:06:12.277183 kubelet[2774]: I0113 21:06:12.276851 2774 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:06:12.279177 kubelet[2774]: I0113 21:06:12.277595 2774 server.go:1264] "Started kubelet"
Jan 13 21:06:12.281204 kubelet[2774]: I0113 21:06:12.280904 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:06:12.301571 kubelet[2774]: I0113 21:06:12.301516 2774 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:06:12.316434 kubelet[2774]: I0113 21:06:12.302398 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:06:12.323700 kubelet[2774]: I0113 21:06:12.323234 2774 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:06:12.323700 kubelet[2774]: I0113 21:06:12.309424 2774 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:06:12.323700 kubelet[2774]: I0113 21:06:12.309408 2774 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:06:12.323700 kubelet[2774]: I0113 21:06:12.323709 2774 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:06:12.328022 kubelet[2774]: I0113 21:06:12.327434 2774 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:06:12.328022 kubelet[2774]: I0113 21:06:12.327600 2774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:06:12.340349 kubelet[2774]: I0113 21:06:12.339336 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:06:12.345194 kubelet[2774]: I0113 21:06:12.341987 2774 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:06:12.346958 kubelet[2774]: E0113 21:06:12.346016 2774 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:06:12.347173 kubelet[2774]: I0113 21:06:12.347036 2774 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:06:12.365179 kubelet[2774]: I0113 21:06:12.364613 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:06:12.366616 kubelet[2774]: I0113 21:06:12.366189 2774 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:06:12.366616 kubelet[2774]: I0113 21:06:12.366530 2774 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:06:12.372128 kubelet[2774]: E0113 21:06:12.370770 2774 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:06:12.432830 kubelet[2774]: I0113 21:06:12.432773 2774 kubelet_node_status.go:73] "Attempting to register node" node="srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.449398 kubelet[2774]: I0113 21:06:12.446673 2774 kubelet_node_status.go:112] "Node was previously registered" node="srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.449398 kubelet[2774]: I0113 21:06:12.446800 2774 kubelet_node_status.go:76] "Successfully registered node" node="srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.477604 kubelet[2774]: E0113 21:06:12.475768 2774 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 21:06:12.490126 kubelet[2774]: I0113 21:06:12.488788 2774 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:06:12.490126 kubelet[2774]: I0113 21:06:12.488816 2774 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:06:12.490126 kubelet[2774]: I0113 21:06:12.488883 2774 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:06:12.490402 kubelet[2774]: I0113 21:06:12.490226 2774 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 21:06:12.490402 kubelet[2774]: I0113 21:06:12.490253 2774 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 21:06:12.490402 kubelet[2774]: I0113 21:06:12.490333 2774 policy_none.go:49] "None policy: Start"
Jan 13 21:06:12.492633 kubelet[2774]: I0113 21:06:12.491463 2774 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:06:12.492633 kubelet[2774]: I0113 21:06:12.491516 2774 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:06:12.492633 kubelet[2774]: I0113 21:06:12.492234 2774 state_mem.go:75] "Updated machine memory state"
Jan 13 21:06:12.513499 kubelet[2774]: I0113 21:06:12.513462 2774 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:06:12.515014 kubelet[2774]: I0113 21:06:12.513819 2774 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:06:12.515014 kubelet[2774]: I0113 21:06:12.514831 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:06:12.679445 kubelet[2774]: I0113 21:06:12.679216 2774 topology_manager.go:215] "Topology Admit Handler" podUID="5663f8283d5cb4c79d584cfce5035efd" podNamespace="kube-system" podName="kube-apiserver-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.680832 kubelet[2774]: I0113 21:06:12.680677 2774 topology_manager.go:215] "Topology Admit Handler" podUID="349323a42eb0ac908ead0a0270aee4f0" podNamespace="kube-system" podName="kube-controller-manager-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.681891 kubelet[2774]: I0113 21:06:12.681815 2774 topology_manager.go:215] "Topology Admit Handler" podUID="6c35af4e7d242c56818f4c23d9a4ebca" podNamespace="kube-system" podName="kube-scheduler-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.711620 kubelet[2774]: W0113 21:06:12.711547 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 13 21:06:12.717380 kubelet[2774]: W0113 21:06:12.716363 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 13 21:06:12.717809 kubelet[2774]: W0113 21:06:12.717624 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 13 21:06:12.734229 kubelet[2774]: I0113 21:06:12.734066 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5663f8283d5cb4c79d584cfce5035efd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-85agx.gb1.brightbox.com\" (UID: \"5663f8283d5cb4c79d584cfce5035efd\") " pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.734502 kubelet[2774]: I0113 21:06:12.734256 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-ca-certs\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.734502 kubelet[2774]: I0113 21:06:12.734329 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-k8s-certs\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.734502 kubelet[2774]: I0113 21:06:12.734371 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c35af4e7d242c56818f4c23d9a4ebca-kubeconfig\") pod \"kube-scheduler-srv-85agx.gb1.brightbox.com\" (UID: \"6c35af4e7d242c56818f4c23d9a4ebca\") " pod="kube-system/kube-scheduler-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.734502 kubelet[2774]: I0113 21:06:12.734443 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5663f8283d5cb4c79d584cfce5035efd-ca-certs\") pod \"kube-apiserver-srv-85agx.gb1.brightbox.com\" (UID: \"5663f8283d5cb4c79d584cfce5035efd\") " pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.734502 kubelet[2774]: I0113 21:06:12.734499 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5663f8283d5cb4c79d584cfce5035efd-k8s-certs\") pod \"kube-apiserver-srv-85agx.gb1.brightbox.com\" (UID: \"5663f8283d5cb4c79d584cfce5035efd\") " pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.734985 kubelet[2774]: I0113 21:06:12.734886 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.735171 kubelet[2774]: I0113 21:06:12.735123 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-flexvolume-dir\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:12.736022 kubelet[2774]: I0113 21:06:12.735341 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/349323a42eb0ac908ead0a0270aee4f0-kubeconfig\") pod \"kube-controller-manager-srv-85agx.gb1.brightbox.com\" (UID: \"349323a42eb0ac908ead0a0270aee4f0\") " pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:13.109228 sudo[2787]: pam_unix(sudo:session): session closed for user root
Jan 13 21:06:13.273538 kubelet[2774]: I0113 21:06:13.273445 2774 apiserver.go:52] "Watching apiserver"
Jan 13 21:06:13.324409 kubelet[2774]: I0113 21:06:13.323648 2774 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 21:06:13.353176 kubelet[2774]: I0113 21:06:13.352115 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com" podStartSLOduration=1.352047052 podStartE2EDuration="1.352047052s" podCreationTimestamp="2025-01-13 21:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:06:13.351867394 +0000 UTC m=+1.244469547" watchObservedRunningTime="2025-01-13 21:06:13.352047052 +0000 UTC m=+1.244649201"
Jan 13 21:06:13.381074 kubelet[2774]: I0113 21:06:13.380067 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-85agx.gb1.brightbox.com" podStartSLOduration=1.38004397 podStartE2EDuration="1.38004397s" podCreationTimestamp="2025-01-13 21:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:06:13.366281899 +0000 UTC m=+1.258884054" watchObservedRunningTime="2025-01-13 21:06:13.38004397 +0000 UTC m=+1.272646105"
Jan 13 21:06:13.399329 kubelet[2774]: I0113 21:06:13.398944 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-85agx.gb1.brightbox.com" podStartSLOduration=1.398895656 podStartE2EDuration="1.398895656s" podCreationTimestamp="2025-01-13 21:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:06:13.382100725 +0000 UTC m=+1.274702886" watchObservedRunningTime="2025-01-13 21:06:13.398895656 +0000 UTC m=+1.291497782"
Jan 13 21:06:13.425474 kubelet[2774]: W0113 21:06:13.424554 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 13 21:06:13.425474 kubelet[2774]: E0113 21:06:13.424645 2774 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-85agx.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-85agx.gb1.brightbox.com"
Jan 13 21:06:15.535639 sudo[1765]: pam_unix(sudo:session): session closed for user root
Jan 13 21:06:15.679172 sshd[1764]: Connection closed by 139.178.68.195 port 36256
Jan 13 21:06:15.680865 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Jan 13 21:06:15.686661 systemd[1]: sshd@6-10.230.36.54:22-139.178.68.195:36256.service: Deactivated successfully.
Jan 13 21:06:15.689911 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:06:15.690227 systemd[1]: session-9.scope: Consumed 7.206s CPU time, 184.2M memory peak, 0B memory swap peak.
Jan 13 21:06:15.691882 systemd-logind[1501]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:06:15.693699 systemd-logind[1501]: Removed session 9.
Jan 13 21:06:24.624250 kubelet[2774]: I0113 21:06:24.623384 2774 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 21:06:24.625682 containerd[1514]: time="2025-01-13T21:06:24.625510761Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:06:24.626885 kubelet[2774]: I0113 21:06:24.625963 2774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 21:06:25.572747 kubelet[2774]: I0113 21:06:25.571559 2774 topology_manager.go:215] "Topology Admit Handler" podUID="af6a5a54-6f2a-4878-b3a5-80e92c107467" podNamespace="kube-system" podName="kube-proxy-gkd4r"
Jan 13 21:06:25.600202 kubelet[2774]: I0113 21:06:25.599979 2774 topology_manager.go:215] "Topology Admit Handler" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" podNamespace="kube-system" podName="cilium-2hwfn"
Jan 13 21:06:25.621558 kubelet[2774]: I0113 21:06:25.619137 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9baad02a-9c4c-47de-bcc5-9bab8abac647-clustermesh-secrets\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.621558 kubelet[2774]: I0113 21:06:25.619219 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-hubble-tls\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.621558 kubelet[2774]: I0113 21:06:25.619314 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-run\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.621558 kubelet[2774]: I0113 21:06:25.619361 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af6a5a54-6f2a-4878-b3a5-80e92c107467-kube-proxy\") pod \"kube-proxy-gkd4r\" (UID: \"af6a5a54-6f2a-4878-b3a5-80e92c107467\") " pod="kube-system/kube-proxy-gkd4r"
Jan 13 21:06:25.621558 kubelet[2774]: I0113 21:06:25.619396 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9j72\" (UniqueName: \"kubernetes.io/projected/af6a5a54-6f2a-4878-b3a5-80e92c107467-kube-api-access-z9j72\") pod \"kube-proxy-gkd4r\" (UID: \"af6a5a54-6f2a-4878-b3a5-80e92c107467\") " pod="kube-system/kube-proxy-gkd4r"
Jan 13 21:06:25.621558 kubelet[2774]: I0113 21:06:25.619442 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-xtables-lock\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622041 kubelet[2774]: I0113 21:06:25.619475 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjz2l\" (UniqueName: \"kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-kube-api-access-zjz2l\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622041 kubelet[2774]: I0113 21:06:25.619519 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af6a5a54-6f2a-4878-b3a5-80e92c107467-lib-modules\") pod \"kube-proxy-gkd4r\" (UID: \"af6a5a54-6f2a-4878-b3a5-80e92c107467\") " pod="kube-system/kube-proxy-gkd4r"
Jan 13 21:06:25.622041 kubelet[2774]: I0113 21:06:25.619553 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-etc-cni-netd\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622041 kubelet[2774]: I0113 21:06:25.619598 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cni-path\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622041 kubelet[2774]: I0113 21:06:25.619626 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-config-path\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622041 kubelet[2774]: I0113 21:06:25.619662 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af6a5a54-6f2a-4878-b3a5-80e92c107467-xtables-lock\") pod \"kube-proxy-gkd4r\" (UID: \"af6a5a54-6f2a-4878-b3a5-80e92c107467\") " pod="kube-system/kube-proxy-gkd4r"
Jan 13 21:06:25.622530 kubelet[2774]: I0113 21:06:25.619695 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-cgroup\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622530 kubelet[2774]: I0113 21:06:25.619735 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-net\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622530 kubelet[2774]: I0113 21:06:25.619781 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-lib-modules\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622530 kubelet[2774]: I0113 21:06:25.619834 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-bpf-maps\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622530 kubelet[2774]: I0113 21:06:25.619876 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-hostproc\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.622530 kubelet[2774]: I0113 21:06:25.619931 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-kernel\") pod \"cilium-2hwfn\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " pod="kube-system/cilium-2hwfn"
Jan 13 21:06:25.623980 systemd[1]: Created slice kubepods-besteffort-podaf6a5a54_6f2a_4878_b3a5_80e92c107467.slice - libcontainer container kubepods-besteffort-podaf6a5a54_6f2a_4878_b3a5_80e92c107467.slice.
Jan 13 21:06:25.644993 systemd[1]: Created slice kubepods-burstable-pod9baad02a_9c4c_47de_bcc5_9bab8abac647.slice - libcontainer container kubepods-burstable-pod9baad02a_9c4c_47de_bcc5_9bab8abac647.slice.
Jan 13 21:06:25.704545 kubelet[2774]: I0113 21:06:25.704373 2774 topology_manager.go:215] "Topology Admit Handler" podUID="3e18377e-529d-4307-a5cd-8bc992479c7d" podNamespace="kube-system" podName="cilium-operator-599987898-78xkk"
Jan 13 21:06:25.716553 systemd[1]: Created slice kubepods-besteffort-pod3e18377e_529d_4307_a5cd_8bc992479c7d.slice - libcontainer container kubepods-besteffort-pod3e18377e_529d_4307_a5cd_8bc992479c7d.slice.
Jan 13 21:06:25.722724 kubelet[2774]: I0113 21:06:25.720760 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffht\" (UniqueName: \"kubernetes.io/projected/3e18377e-529d-4307-a5cd-8bc992479c7d-kube-api-access-wffht\") pod \"cilium-operator-599987898-78xkk\" (UID: \"3e18377e-529d-4307-a5cd-8bc992479c7d\") " pod="kube-system/cilium-operator-599987898-78xkk"
Jan 13 21:06:25.722724 kubelet[2774]: I0113 21:06:25.720913 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e18377e-529d-4307-a5cd-8bc992479c7d-cilium-config-path\") pod \"cilium-operator-599987898-78xkk\" (UID: \"3e18377e-529d-4307-a5cd-8bc992479c7d\") " pod="kube-system/cilium-operator-599987898-78xkk"
Jan 13 21:06:25.938978 containerd[1514]: time="2025-01-13T21:06:25.938783644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gkd4r,Uid:af6a5a54-6f2a-4878-b3a5-80e92c107467,Namespace:kube-system,Attempt:0,}"
Jan 13 21:06:25.956487 containerd[1514]: time="2025-01-13T21:06:25.956075205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2hwfn,Uid:9baad02a-9c4c-47de-bcc5-9bab8abac647,Namespace:kube-system,Attempt:0,}"
Jan 13 21:06:25.983721 containerd[1514]: time="2025-01-13T21:06:25.983542419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:06:25.983721 containerd[1514]: time="2025-01-13T21:06:25.983663790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:06:25.984092 containerd[1514]: time="2025-01-13T21:06:25.983688440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:25.984092 containerd[1514]: time="2025-01-13T21:06:25.983865328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:26.009481 containerd[1514]: time="2025-01-13T21:06:26.008853748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:06:26.009481 containerd[1514]: time="2025-01-13T21:06:26.008946102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:06:26.009481 containerd[1514]: time="2025-01-13T21:06:26.008970667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:26.009481 containerd[1514]: time="2025-01-13T21:06:26.009098444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:26.021402 systemd[1]: Started cri-containerd-ea0b10f4c391d949782e3a9f406f76a30486eee2bd728288ca3da3c5e0a819bd.scope - libcontainer container ea0b10f4c391d949782e3a9f406f76a30486eee2bd728288ca3da3c5e0a819bd.
Jan 13 21:06:26.024610 containerd[1514]: time="2025-01-13T21:06:26.024303250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-78xkk,Uid:3e18377e-529d-4307-a5cd-8bc992479c7d,Namespace:kube-system,Attempt:0,}"
Jan 13 21:06:26.047376 systemd[1]: Started cri-containerd-51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a.scope - libcontainer container 51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a.
Jan 13 21:06:26.093176 containerd[1514]: time="2025-01-13T21:06:26.093005219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gkd4r,Uid:af6a5a54-6f2a-4878-b3a5-80e92c107467,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea0b10f4c391d949782e3a9f406f76a30486eee2bd728288ca3da3c5e0a819bd\""
Jan 13 21:06:26.102190 containerd[1514]: time="2025-01-13T21:06:26.102103436Z" level=info msg="CreateContainer within sandbox \"ea0b10f4c391d949782e3a9f406f76a30486eee2bd728288ca3da3c5e0a819bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:06:26.117844 containerd[1514]: time="2025-01-13T21:06:26.117801697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2hwfn,Uid:9baad02a-9c4c-47de-bcc5-9bab8abac647,Namespace:kube-system,Attempt:0,} returns sandbox id \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\""
Jan 13 21:06:26.122885 containerd[1514]: time="2025-01-13T21:06:26.122841494Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 21:06:26.127505 containerd[1514]: time="2025-01-13T21:06:26.127371899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:06:26.127505 containerd[1514]: time="2025-01-13T21:06:26.127497206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:06:26.127505 containerd[1514]: time="2025-01-13T21:06:26.127523081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:26.128857 containerd[1514]: time="2025-01-13T21:06:26.128663371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:26.140751 containerd[1514]: time="2025-01-13T21:06:26.140634868Z" level=info msg="CreateContainer within sandbox \"ea0b10f4c391d949782e3a9f406f76a30486eee2bd728288ca3da3c5e0a819bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7dbfbae7546d159776aac43bdbff9c5db90bc9b713e2cace4364b9cf77c632b3\""
Jan 13 21:06:26.143584 containerd[1514]: time="2025-01-13T21:06:26.143289549Z" level=info msg="StartContainer for \"7dbfbae7546d159776aac43bdbff9c5db90bc9b713e2cace4364b9cf77c632b3\""
Jan 13 21:06:26.162368 systemd[1]: Started cri-containerd-41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379.scope - libcontainer container 41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379.
Jan 13 21:06:26.200351 systemd[1]: Started cri-containerd-7dbfbae7546d159776aac43bdbff9c5db90bc9b713e2cace4364b9cf77c632b3.scope - libcontainer container 7dbfbae7546d159776aac43bdbff9c5db90bc9b713e2cace4364b9cf77c632b3.
Jan 13 21:06:26.272485 containerd[1514]: time="2025-01-13T21:06:26.272254755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-78xkk,Uid:3e18377e-529d-4307-a5cd-8bc992479c7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379\""
Jan 13 21:06:26.298837 containerd[1514]: time="2025-01-13T21:06:26.298770714Z" level=info msg="StartContainer for \"7dbfbae7546d159776aac43bdbff9c5db90bc9b713e2cace4364b9cf77c632b3\" returns successfully"
Jan 13 21:06:26.468797 kubelet[2774]: I0113 21:06:26.468541 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gkd4r" podStartSLOduration=1.468462817 podStartE2EDuration="1.468462817s" podCreationTimestamp="2025-01-13 21:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:06:26.467807897 +0000 UTC m=+14.360410040" watchObservedRunningTime="2025-01-13 21:06:26.468462817 +0000 UTC m=+14.361064951"
Jan 13 21:06:34.539025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642291164.mount: Deactivated successfully.
Jan 13 21:06:38.183570 containerd[1514]: time="2025-01-13T21:06:38.183440101Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:06:38.188722 containerd[1514]: time="2025-01-13T21:06:38.188640560Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734679"
Jan 13 21:06:38.190107 containerd[1514]: time="2025-01-13T21:06:38.190061014Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:06:38.193961 containerd[1514]: time="2025-01-13T21:06:38.193919329Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.071020581s"
Jan 13 21:06:38.194061 containerd[1514]: time="2025-01-13T21:06:38.193975050Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 21:06:38.197038 containerd[1514]: time="2025-01-13T21:06:38.196501168Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 21:06:38.198101 containerd[1514]: time="2025-01-13T21:06:38.197784040Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:06:38.289041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285069767.mount: Deactivated successfully.
Jan 13 21:06:38.311420 containerd[1514]: time="2025-01-13T21:06:38.310921971Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\""
Jan 13 21:06:38.319875 containerd[1514]: time="2025-01-13T21:06:38.319611116Z" level=info msg="StartContainer for \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\""
Jan 13 21:06:38.509823 systemd[1]: run-containerd-runc-k8s.io-6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253-runc.mZTPT3.mount: Deactivated successfully.
Jan 13 21:06:38.525731 systemd[1]: Started cri-containerd-6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253.scope - libcontainer container 6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253.
Jan 13 21:06:38.586216 containerd[1514]: time="2025-01-13T21:06:38.585686867Z" level=info msg="StartContainer for \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\" returns successfully"
Jan 13 21:06:38.602859 systemd[1]: cri-containerd-6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253.scope: Deactivated successfully.
Jan 13 21:06:38.863931 containerd[1514]: time="2025-01-13T21:06:38.832827650Z" level=info msg="shim disconnected" id=6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253 namespace=k8s.io
Jan 13 21:06:38.863931 containerd[1514]: time="2025-01-13T21:06:38.863925117Z" level=warning msg="cleaning up after shim disconnected" id=6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253 namespace=k8s.io
Jan 13 21:06:38.864504 containerd[1514]: time="2025-01-13T21:06:38.863950129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:06:39.280044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253-rootfs.mount: Deactivated successfully.
Jan 13 21:06:39.566932 containerd[1514]: time="2025-01-13T21:06:39.566477732Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:06:39.623993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4192087287.mount: Deactivated successfully.
Jan 13 21:06:39.632056 containerd[1514]: time="2025-01-13T21:06:39.631982341Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\""
Jan 13 21:06:39.635362 containerd[1514]: time="2025-01-13T21:06:39.633017705Z" level=info msg="StartContainer for \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\""
Jan 13 21:06:39.697432 systemd[1]: Started cri-containerd-4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52.scope - libcontainer container 4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52.
Jan 13 21:06:39.793605 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:06:39.794038 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:39.794206 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:06:39.802586 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:06:39.802901 systemd[1]: cri-containerd-4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52.scope: Deactivated successfully.
Jan 13 21:06:39.815579 containerd[1514]: time="2025-01-13T21:06:39.815541025Z" level=info msg="StartContainer for \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\" returns successfully"
Jan 13 21:06:39.861641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:39.871065 containerd[1514]: time="2025-01-13T21:06:39.870789775Z" level=info msg="shim disconnected" id=4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52 namespace=k8s.io
Jan 13 21:06:39.871065 containerd[1514]: time="2025-01-13T21:06:39.870856912Z" level=warning msg="cleaning up after shim disconnected" id=4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52 namespace=k8s.io
Jan 13 21:06:39.871065 containerd[1514]: time="2025-01-13T21:06:39.870886765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:06:40.278811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52-rootfs.mount: Deactivated successfully.
Jan 13 21:06:40.581552 containerd[1514]: time="2025-01-13T21:06:40.581472924Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:06:40.642034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141065810.mount: Deactivated successfully.
Jan 13 21:06:40.647665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619911820.mount: Deactivated successfully.
Jan 13 21:06:40.653540 containerd[1514]: time="2025-01-13T21:06:40.652690006Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\""
Jan 13 21:06:40.655214 containerd[1514]: time="2025-01-13T21:06:40.654428714Z" level=info msg="StartContainer for \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\""
Jan 13 21:06:40.738682 systemd[1]: Started cri-containerd-6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647.scope - libcontainer container 6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647.
Jan 13 21:06:40.815345 containerd[1514]: time="2025-01-13T21:06:40.814349941Z" level=info msg="StartContainer for \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\" returns successfully"
Jan 13 21:06:40.825344 systemd[1]: cri-containerd-6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647.scope: Deactivated successfully.
Jan 13 21:06:41.024294 containerd[1514]: time="2025-01-13T21:06:41.023086375Z" level=info msg="shim disconnected" id=6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647 namespace=k8s.io
Jan 13 21:06:41.024294 containerd[1514]: time="2025-01-13T21:06:41.023402258Z" level=warning msg="cleaning up after shim disconnected" id=6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647 namespace=k8s.io
Jan 13 21:06:41.024844 containerd[1514]: time="2025-01-13T21:06:41.024311253Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:06:41.277182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647-rootfs.mount: Deactivated successfully.
Jan 13 21:06:41.586370 containerd[1514]: time="2025-01-13T21:06:41.586086125Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:06:41.648355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428559267.mount: Deactivated successfully.
Jan 13 21:06:41.651610 containerd[1514]: time="2025-01-13T21:06:41.651565041Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\""
Jan 13 21:06:41.653024 containerd[1514]: time="2025-01-13T21:06:41.652994547Z" level=info msg="StartContainer for \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\""
Jan 13 21:06:41.707896 systemd[1]: Started cri-containerd-e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1.scope - libcontainer container e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1.
Jan 13 21:06:41.746461 systemd[1]: cri-containerd-e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1.scope: Deactivated successfully.
Jan 13 21:06:41.749894 containerd[1514]: time="2025-01-13T21:06:41.749768801Z" level=info msg="StartContainer for \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\" returns successfully"
Jan 13 21:06:41.809516 containerd[1514]: time="2025-01-13T21:06:41.807129912Z" level=info msg="shim disconnected" id=e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1 namespace=k8s.io
Jan 13 21:06:41.809516 containerd[1514]: time="2025-01-13T21:06:41.808795502Z" level=warning msg="cleaning up after shim disconnected" id=e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1 namespace=k8s.io
Jan 13 21:06:41.809516 containerd[1514]: time="2025-01-13T21:06:41.808855258Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:06:42.277786 systemd[1]: run-containerd-runc-k8s.io-e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1-runc.L7Hmfk.mount: Deactivated successfully.
Jan 13 21:06:42.277959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1-rootfs.mount: Deactivated successfully.
Jan 13 21:06:42.594308 containerd[1514]: time="2025-01-13T21:06:42.593729810Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:06:42.615853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014098138.mount: Deactivated successfully.
Jan 13 21:06:42.617990 containerd[1514]: time="2025-01-13T21:06:42.617755473Z" level=info msg="CreateContainer within sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\""
Jan 13 21:06:42.620206 containerd[1514]: time="2025-01-13T21:06:42.619975873Z" level=info msg="StartContainer for \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\""
Jan 13 21:06:42.669378 systemd[1]: Started cri-containerd-cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b.scope - libcontainer container cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b.
Jan 13 21:06:42.723202 containerd[1514]: time="2025-01-13T21:06:42.722006574Z" level=info msg="StartContainer for \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\" returns successfully"
Jan 13 21:06:43.029249 kubelet[2774]: I0113 21:06:43.029084 2774 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:06:43.069180 kubelet[2774]: I0113 21:06:43.068922 2774 topology_manager.go:215] "Topology Admit Handler" podUID="9175fe33-3521-42a6-b0e9-f7c865c24148" podNamespace="kube-system" podName="coredns-7db6d8ff4d-98rtk"
Jan 13 21:06:43.075749 kubelet[2774]: I0113 21:06:43.075337 2774 topology_manager.go:215] "Topology Admit Handler" podUID="5e03b7e5-3eeb-441a-8023-dd76c5a95fd9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tglk4"
Jan 13 21:06:43.089692 systemd[1]: Created slice kubepods-burstable-pod9175fe33_3521_42a6_b0e9_f7c865c24148.slice - libcontainer container kubepods-burstable-pod9175fe33_3521_42a6_b0e9_f7c865c24148.slice.
Jan 13 21:06:43.108609 systemd[1]: Created slice kubepods-burstable-pod5e03b7e5_3eeb_441a_8023_dd76c5a95fd9.slice - libcontainer container kubepods-burstable-pod5e03b7e5_3eeb_441a_8023_dd76c5a95fd9.slice.
Jan 13 21:06:43.197408 kubelet[2774]: I0113 21:06:43.197316 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fcg4\" (UniqueName: \"kubernetes.io/projected/9175fe33-3521-42a6-b0e9-f7c865c24148-kube-api-access-9fcg4\") pod \"coredns-7db6d8ff4d-98rtk\" (UID: \"9175fe33-3521-42a6-b0e9-f7c865c24148\") " pod="kube-system/coredns-7db6d8ff4d-98rtk"
Jan 13 21:06:43.197623 kubelet[2774]: I0113 21:06:43.197420 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9175fe33-3521-42a6-b0e9-f7c865c24148-config-volume\") pod \"coredns-7db6d8ff4d-98rtk\" (UID: \"9175fe33-3521-42a6-b0e9-f7c865c24148\") " pod="kube-system/coredns-7db6d8ff4d-98rtk"
Jan 13 21:06:43.197623 kubelet[2774]: I0113 21:06:43.197476 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e03b7e5-3eeb-441a-8023-dd76c5a95fd9-config-volume\") pod \"coredns-7db6d8ff4d-tglk4\" (UID: \"5e03b7e5-3eeb-441a-8023-dd76c5a95fd9\") " pod="kube-system/coredns-7db6d8ff4d-tglk4"
Jan 13 21:06:43.197623 kubelet[2774]: I0113 21:06:43.197510 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kffsc\" (UniqueName: \"kubernetes.io/projected/5e03b7e5-3eeb-441a-8023-dd76c5a95fd9-kube-api-access-kffsc\") pod \"coredns-7db6d8ff4d-tglk4\" (UID: \"5e03b7e5-3eeb-441a-8023-dd76c5a95fd9\") " pod="kube-system/coredns-7db6d8ff4d-tglk4"
Jan 13 21:06:43.409319 containerd[1514]: time="2025-01-13T21:06:43.407256955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-98rtk,Uid:9175fe33-3521-42a6-b0e9-f7c865c24148,Namespace:kube-system,Attempt:0,}"
Jan 13 21:06:43.417086 containerd[1514]: time="2025-01-13T21:06:43.416781916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tglk4,Uid:5e03b7e5-3eeb-441a-8023-dd76c5a95fd9,Namespace:kube-system,Attempt:0,}"
Jan 13 21:06:43.643535 kubelet[2774]: I0113 21:06:43.643317 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2hwfn" podStartSLOduration=6.567433392 podStartE2EDuration="18.64327719s" podCreationTimestamp="2025-01-13 21:06:25 +0000 UTC" firstStartedPulling="2025-01-13 21:06:26.119517279 +0000 UTC m=+14.012119405" lastFinishedPulling="2025-01-13 21:06:38.195361076 +0000 UTC m=+26.087963203" observedRunningTime="2025-01-13 21:06:43.640789689 +0000 UTC m=+31.533391866" watchObservedRunningTime="2025-01-13 21:06:43.64327719 +0000 UTC m=+31.535879326"
Jan 13 21:06:46.115072 containerd[1514]: time="2025-01-13T21:06:46.114938266Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:06:46.116793 containerd[1514]: time="2025-01-13T21:06:46.116720091Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906609"
Jan 13 21:06:46.118128 containerd[1514]: time="2025-01-13T21:06:46.118062723Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:06:46.130656 containerd[1514]: time="2025-01-13T21:06:46.129990733Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.933407031s"
Jan 13 21:06:46.130656 containerd[1514]: time="2025-01-13T21:06:46.130066786Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:06:46.138388 containerd[1514]: time="2025-01-13T21:06:46.138320682Z" level=info msg="CreateContainer within sandbox \"41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:06:46.159997 containerd[1514]: time="2025-01-13T21:06:46.159924804Z" level=info msg="CreateContainer within sandbox \"41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\""
Jan 13 21:06:46.163236 containerd[1514]: time="2025-01-13T21:06:46.163203194Z" level=info msg="StartContainer for \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\""
Jan 13 21:06:46.219410 systemd[1]: Started cri-containerd-6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a.scope - libcontainer container 6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a.
Jan 13 21:06:46.265700 containerd[1514]: time="2025-01-13T21:06:46.264982167Z" level=info msg="StartContainer for \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\" returns successfully"
Jan 13 21:06:50.441734 systemd-networkd[1428]: cilium_host: Link UP
Jan 13 21:06:50.442466 systemd-networkd[1428]: cilium_net: Link UP
Jan 13 21:06:50.442793 systemd-networkd[1428]: cilium_net: Gained carrier
Jan 13 21:06:50.443074 systemd-networkd[1428]: cilium_host: Gained carrier
Jan 13 21:06:50.620045 systemd-networkd[1428]: cilium_vxlan: Link UP
Jan 13 21:06:50.620057 systemd-networkd[1428]: cilium_vxlan: Gained carrier
Jan 13 21:06:50.934417 systemd-networkd[1428]: cilium_net: Gained IPv6LL
Jan 13 21:06:51.189843 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:06:51.231010 systemd-networkd[1428]: cilium_host: Gained IPv6LL
Jan 13 21:06:51.807443 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL
Jan 13 21:06:52.260714 systemd-networkd[1428]: lxc_health: Link UP
Jan 13 21:06:52.276290 systemd-networkd[1428]: lxc_health: Gained carrier
Jan 13 21:06:52.551461 kernel: eth0: renamed from tmp1cfbe
Jan 13 21:06:52.556187 systemd-networkd[1428]: lxc1d94b78475ee: Link UP
Jan 13 21:06:52.561603 systemd-networkd[1428]: lxcb498bd9a6973: Link UP
Jan 13 21:06:52.569817 kernel: eth0: renamed from tmpbe594
Jan 13 21:06:52.583084 systemd-networkd[1428]: lxc1d94b78475ee: Gained carrier
Jan 13 21:06:52.586377 systemd-networkd[1428]: lxcb498bd9a6973: Gained carrier
Jan 13 21:06:53.471561 systemd-networkd[1428]: lxc_health: Gained IPv6LL
Jan 13 21:06:53.981976 kubelet[2774]: I0113 21:06:53.981896 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-78xkk" podStartSLOduration=9.126330049 podStartE2EDuration="28.981875152s" podCreationTimestamp="2025-01-13 21:06:25 +0000 UTC" firstStartedPulling="2025-01-13 21:06:26.275766 +0000 UTC m=+14.168368129" lastFinishedPulling="2025-01-13 21:06:46.131311102 +0000 UTC m=+34.023913232" observedRunningTime="2025-01-13 21:06:46.629454075 +0000 UTC m=+34.522056219" watchObservedRunningTime="2025-01-13 21:06:53.981875152 +0000 UTC m=+41.874477289"
Jan 13 21:06:54.174681 systemd-networkd[1428]: lxc1d94b78475ee: Gained IPv6LL
Jan 13 21:06:54.558334 systemd-networkd[1428]: lxcb498bd9a6973: Gained IPv6LL
Jan 13 21:06:58.619141 containerd[1514]: time="2025-01-13T21:06:58.617623878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:06:58.619141 containerd[1514]: time="2025-01-13T21:06:58.619019512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:06:58.619141 containerd[1514]: time="2025-01-13T21:06:58.619053444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:58.622668 containerd[1514]: time="2025-01-13T21:06:58.619225419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:58.695278 systemd[1]: run-containerd-runc-k8s.io-1cfbe947f0234ff3478bc64dbd415c650f5e97ba7c4b991a0eafb82cac95abd3-runc.t3g6Uj.mount: Deactivated successfully.
Jan 13 21:06:58.711405 systemd[1]: Started cri-containerd-1cfbe947f0234ff3478bc64dbd415c650f5e97ba7c4b991a0eafb82cac95abd3.scope - libcontainer container 1cfbe947f0234ff3478bc64dbd415c650f5e97ba7c4b991a0eafb82cac95abd3.
Jan 13 21:06:58.751991 containerd[1514]: time="2025-01-13T21:06:58.751697205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:06:58.751991 containerd[1514]: time="2025-01-13T21:06:58.751821888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:06:58.751991 containerd[1514]: time="2025-01-13T21:06:58.751847200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:58.752650 containerd[1514]: time="2025-01-13T21:06:58.752229931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:06:58.819392 systemd[1]: Started cri-containerd-be5941eeecdc51e5d2ae9b3acfc183e8ae4ea894ad94906e9eae8427bd9e1e92.scope - libcontainer container be5941eeecdc51e5d2ae9b3acfc183e8ae4ea894ad94906e9eae8427bd9e1e92.
Jan 13 21:06:58.885042 containerd[1514]: time="2025-01-13T21:06:58.882843631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tglk4,Uid:5e03b7e5-3eeb-441a-8023-dd76c5a95fd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cfbe947f0234ff3478bc64dbd415c650f5e97ba7c4b991a0eafb82cac95abd3\""
Jan 13 21:06:58.892904 containerd[1514]: time="2025-01-13T21:06:58.892814101Z" level=info msg="CreateContainer within sandbox \"1cfbe947f0234ff3478bc64dbd415c650f5e97ba7c4b991a0eafb82cac95abd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:06:58.916906 containerd[1514]: time="2025-01-13T21:06:58.916358023Z" level=info msg="CreateContainer within sandbox \"1cfbe947f0234ff3478bc64dbd415c650f5e97ba7c4b991a0eafb82cac95abd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fe64e869bb8b9eddd9635b14918ecbf1937ee91e9332f6d51bf6b1bf27d0535\""
Jan 13 21:06:58.918394 containerd[1514]: time="2025-01-13T21:06:58.917700529Z" level=info msg="StartContainer for \"0fe64e869bb8b9eddd9635b14918ecbf1937ee91e9332f6d51bf6b1bf27d0535\""
Jan 13 21:06:58.944230 containerd[1514]: time="2025-01-13T21:06:58.944052521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-98rtk,Uid:9175fe33-3521-42a6-b0e9-f7c865c24148,Namespace:kube-system,Attempt:0,} returns sandbox id \"be5941eeecdc51e5d2ae9b3acfc183e8ae4ea894ad94906e9eae8427bd9e1e92\""
Jan 13 21:06:58.953778 containerd[1514]: time="2025-01-13T21:06:58.953581098Z" level=info msg="CreateContainer within sandbox \"be5941eeecdc51e5d2ae9b3acfc183e8ae4ea894ad94906e9eae8427bd9e1e92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:06:58.968385 systemd[1]: Started cri-containerd-0fe64e869bb8b9eddd9635b14918ecbf1937ee91e9332f6d51bf6b1bf27d0535.scope - libcontainer container 0fe64e869bb8b9eddd9635b14918ecbf1937ee91e9332f6d51bf6b1bf27d0535.
Jan 13 21:06:58.982449 containerd[1514]: time="2025-01-13T21:06:58.982397802Z" level=info msg="CreateContainer within sandbox \"be5941eeecdc51e5d2ae9b3acfc183e8ae4ea894ad94906e9eae8427bd9e1e92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f198288057494c1de1df9ce58d8546f828f16802a477087c5560f95e1e08bd1f\""
Jan 13 21:06:58.985593 containerd[1514]: time="2025-01-13T21:06:58.983740504Z" level=info msg="StartContainer for \"f198288057494c1de1df9ce58d8546f828f16802a477087c5560f95e1e08bd1f\""
Jan 13 21:06:59.026306 containerd[1514]: time="2025-01-13T21:06:59.026101183Z" level=info msg="StartContainer for \"0fe64e869bb8b9eddd9635b14918ecbf1937ee91e9332f6d51bf6b1bf27d0535\" returns successfully"
Jan 13 21:06:59.032349 systemd[1]: Started cri-containerd-f198288057494c1de1df9ce58d8546f828f16802a477087c5560f95e1e08bd1f.scope - libcontainer container f198288057494c1de1df9ce58d8546f828f16802a477087c5560f95e1e08bd1f.
Jan 13 21:06:59.077593 containerd[1514]: time="2025-01-13T21:06:59.077521729Z" level=info msg="StartContainer for \"f198288057494c1de1df9ce58d8546f828f16802a477087c5560f95e1e08bd1f\" returns successfully"
Jan 13 21:06:59.634938 systemd[1]: run-containerd-runc-k8s.io-be5941eeecdc51e5d2ae9b3acfc183e8ae4ea894ad94906e9eae8427bd9e1e92-runc.yjtrif.mount: Deactivated successfully.
Jan 13 21:06:59.681896 kubelet[2774]: I0113 21:06:59.681665 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-98rtk" podStartSLOduration=34.681610204 podStartE2EDuration="34.681610204s" podCreationTimestamp="2025-01-13 21:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:06:59.679779686 +0000 UTC m=+47.572381840" watchObservedRunningTime="2025-01-13 21:06:59.681610204 +0000 UTC m=+47.574212360"
Jan 13 21:07:24.349817 systemd[1]: Started sshd@7-10.230.36.54:22-139.178.68.195:44484.service - OpenSSH per-connection server daemon (139.178.68.195:44484).
Jan 13 21:07:25.307330 sshd[4144]: Accepted publickey for core from 139.178.68.195 port 44484 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:07:25.310234 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:07:25.320111 systemd-logind[1501]: New session 10 of user core.
Jan 13 21:07:25.328384 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:07:26.491231 sshd[4146]: Connection closed by 139.178.68.195 port 44484
Jan 13 21:07:26.492362 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Jan 13 21:07:26.498293 systemd[1]: sshd@7-10.230.36.54:22-139.178.68.195:44484.service: Deactivated successfully.
Jan 13 21:07:26.501017 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:07:26.502517 systemd-logind[1501]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:07:26.504390 systemd-logind[1501]: Removed session 10.
Jan 13 21:07:31.653138 systemd[1]: Started sshd@8-10.230.36.54:22-139.178.68.195:42604.service - OpenSSH per-connection server daemon (139.178.68.195:42604).
Jan 13 21:07:32.575837 sshd[4162]: Accepted publickey for core from 139.178.68.195 port 42604 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:07:32.578091 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:07:32.586774 systemd-logind[1501]: New session 11 of user core.
Jan 13 21:07:32.593401 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:07:33.304611 sshd[4164]: Connection closed by 139.178.68.195 port 42604 Jan 13 21:07:33.305880 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:33.312010 systemd[1]: sshd@8-10.230.36.54:22-139.178.68.195:42604.service: Deactivated successfully. Jan 13 21:07:33.315710 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:07:33.317327 systemd-logind[1501]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:07:33.319462 systemd-logind[1501]: Removed session 11. Jan 13 21:07:38.469074 systemd[1]: Started sshd@9-10.230.36.54:22-139.178.68.195:36196.service - OpenSSH per-connection server daemon (139.178.68.195:36196). Jan 13 21:07:39.376270 sshd[4175]: Accepted publickey for core from 139.178.68.195 port 36196 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:07:39.380924 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:39.389793 systemd-logind[1501]: New session 12 of user core. Jan 13 21:07:39.396504 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:07:40.109874 sshd[4177]: Connection closed by 139.178.68.195 port 36196 Jan 13 21:07:40.109607 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:40.116383 systemd[1]: sshd@9-10.230.36.54:22-139.178.68.195:36196.service: Deactivated successfully. Jan 13 21:07:40.119094 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:07:40.120256 systemd-logind[1501]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:07:40.122137 systemd-logind[1501]: Removed session 12. Jan 13 21:07:45.265539 systemd[1]: Started sshd@10-10.230.36.54:22-139.178.68.195:45686.service - OpenSSH per-connection server daemon (139.178.68.195:45686). Jan 13 21:07:46.205523 sshd[4189]: Accepted publickey for core from 139.178.68.195 port 45686 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:07:46.207723 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:46.216099 systemd-logind[1501]: New session 13 of user core. Jan 13 21:07:46.221387 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:07:46.934827 sshd[4191]: Connection closed by 139.178.68.195 port 45686 Jan 13 21:07:46.933987 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:46.939432 systemd[1]: sshd@10-10.230.36.54:22-139.178.68.195:45686.service: Deactivated successfully. Jan 13 21:07:46.942089 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:07:46.943019 systemd-logind[1501]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:07:46.944822 systemd-logind[1501]: Removed session 13. Jan 13 21:07:47.093519 systemd[1]: Started sshd@11-10.230.36.54:22-139.178.68.195:45700.service - OpenSSH per-connection server daemon (139.178.68.195:45700). Jan 13 21:07:47.995502 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 45700 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:07:47.997969 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:48.006846 systemd-logind[1501]: New session 14 of user core. Jan 13 21:07:48.017411 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 21:07:48.778416 sshd[4205]: Connection closed by 139.178.68.195 port 45700 Jan 13 21:07:48.779551 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:48.785020 systemd[1]: sshd@11-10.230.36.54:22-139.178.68.195:45700.service: Deactivated successfully. Jan 13 21:07:48.788405 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:07:48.792831 systemd-logind[1501]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:07:48.794675 systemd-logind[1501]: Removed session 14. Jan 13 21:07:48.939627 systemd[1]: Started sshd@12-10.230.36.54:22-139.178.68.195:45710.service - OpenSSH per-connection server daemon (139.178.68.195:45710). Jan 13 21:07:49.841954 sshd[4214]: Accepted publickey for core from 139.178.68.195 port 45710 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:07:49.845767 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:49.854211 systemd-logind[1501]: New session 15 of user core. Jan 13 21:07:49.863442 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:07:50.556844 sshd[4217]: Connection closed by 139.178.68.195 port 45710 Jan 13 21:07:50.558049 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:50.563718 systemd[1]: sshd@12-10.230.36.54:22-139.178.68.195:45710.service: Deactivated successfully. Jan 13 21:07:50.566512 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:07:50.567788 systemd-logind[1501]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:07:50.569653 systemd-logind[1501]: Removed session 15. Jan 13 21:07:55.719878 systemd[1]: Started sshd@13-10.230.36.54:22-139.178.68.195:52154.service - OpenSSH per-connection server daemon (139.178.68.195:52154). Jan 13 21:07:56.628354 sshd[4228]: Accepted publickey for core from 139.178.68.195 port 52154 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:07:56.630676 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:56.641256 systemd-logind[1501]: New session 16 of user core. Jan 13 21:07:56.646442 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:07:57.348999 sshd[4230]: Connection closed by 139.178.68.195 port 52154 Jan 13 21:07:57.350042 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:57.354957 systemd-logind[1501]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:07:57.355687 systemd[1]: sshd@13-10.230.36.54:22-139.178.68.195:52154.service: Deactivated successfully. Jan 13 21:07:57.358547 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:07:57.360289 systemd-logind[1501]: Removed session 16. Jan 13 21:08:02.514545 systemd[1]: Started sshd@14-10.230.36.54:22-139.178.68.195:52168.service - OpenSSH per-connection server daemon (139.178.68.195:52168). Jan 13 21:08:03.425369 sshd[4244]: Accepted publickey for core from 139.178.68.195 port 52168 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:03.427438 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:03.435467 systemd-logind[1501]: New session 17 of user core. Jan 13 21:08:03.444421 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 21:08:04.138395 sshd[4246]: Connection closed by 139.178.68.195 port 52168 Jan 13 21:08:04.141591 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:04.147122 systemd[1]: sshd@14-10.230.36.54:22-139.178.68.195:52168.service: Deactivated successfully. Jan 13 21:08:04.150997 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:08:04.153145 systemd-logind[1501]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:08:04.155574 systemd-logind[1501]: Removed session 17. Jan 13 21:08:04.291443 systemd[1]: Started sshd@15-10.230.36.54:22-139.178.68.195:52184.service - OpenSSH per-connection server daemon (139.178.68.195:52184). Jan 13 21:08:05.207252 sshd[4258]: Accepted publickey for core from 139.178.68.195 port 52184 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:05.209335 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:05.216810 systemd-logind[1501]: New session 18 of user core. Jan 13 21:08:05.223374 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:08:06.189652 sshd[4261]: Connection closed by 139.178.68.195 port 52184 Jan 13 21:08:06.190987 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:06.198233 systemd[1]: sshd@15-10.230.36.54:22-139.178.68.195:52184.service: Deactivated successfully. Jan 13 21:08:06.200899 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:08:06.201957 systemd-logind[1501]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:08:06.203900 systemd-logind[1501]: Removed session 18. Jan 13 21:08:06.353635 systemd[1]: Started sshd@16-10.230.36.54:22-139.178.68.195:47358.service - OpenSSH per-connection server daemon (139.178.68.195:47358). Jan 13 21:08:07.300530 sshd[4269]: Accepted publickey for core from 139.178.68.195 port 47358 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:07.303046 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:07.311141 systemd-logind[1501]: New session 19 of user core. Jan 13 21:08:07.319469 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:08:10.235278 sshd[4271]: Connection closed by 139.178.68.195 port 47358 Jan 13 21:08:10.236655 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:10.243007 systemd[1]: sshd@16-10.230.36.54:22-139.178.68.195:47358.service: Deactivated successfully. Jan 13 21:08:10.246489 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:08:10.247779 systemd-logind[1501]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:08:10.249355 systemd-logind[1501]: Removed session 19. Jan 13 21:08:10.392543 systemd[1]: Started sshd@17-10.230.36.54:22-139.178.68.195:47362.service - OpenSSH per-connection server daemon (139.178.68.195:47362). Jan 13 21:08:11.296095 sshd[4287]: Accepted publickey for core from 139.178.68.195 port 47362 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:11.298325 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:11.306421 systemd-logind[1501]: New session 20 of user core. Jan 13 21:08:11.311359 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 21:08:12.232933 sshd[4289]: Connection closed by 139.178.68.195 port 47362 Jan 13 21:08:12.234419 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:12.240531 systemd[1]: sshd@17-10.230.36.54:22-139.178.68.195:47362.service: Deactivated successfully. Jan 13 21:08:12.243544 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:08:12.244856 systemd-logind[1501]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:08:12.246938 systemd-logind[1501]: Removed session 20. Jan 13 21:08:12.388412 systemd[1]: Started sshd@18-10.230.36.54:22-139.178.68.195:47366.service - OpenSSH per-connection server daemon (139.178.68.195:47366). Jan 13 21:08:13.287687 sshd[4298]: Accepted publickey for core from 139.178.68.195 port 47366 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:13.290505 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:13.298799 systemd-logind[1501]: New session 21 of user core. Jan 13 21:08:13.308395 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:08:14.004599 sshd[4302]: Connection closed by 139.178.68.195 port 47366 Jan 13 21:08:14.005898 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:14.012063 systemd[1]: sshd@18-10.230.36.54:22-139.178.68.195:47366.service: Deactivated successfully. Jan 13 21:08:14.017294 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:08:14.019104 systemd-logind[1501]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:08:14.020696 systemd-logind[1501]: Removed session 21. Jan 13 21:08:19.166599 systemd[1]: Started sshd@19-10.230.36.54:22-139.178.68.195:48268.service - OpenSSH per-connection server daemon (139.178.68.195:48268). Jan 13 21:08:20.067776 sshd[4315]: Accepted publickey for core from 139.178.68.195 port 48268 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:20.070131 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:20.079243 systemd-logind[1501]: New session 22 of user core. Jan 13 21:08:20.085386 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:08:20.777098 sshd[4317]: Connection closed by 139.178.68.195 port 48268 Jan 13 21:08:20.778555 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:20.784336 systemd[1]: sshd@19-10.230.36.54:22-139.178.68.195:48268.service: Deactivated successfully. Jan 13 21:08:20.784866 systemd-logind[1501]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:08:20.788184 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:08:20.791261 systemd-logind[1501]: Removed session 22. Jan 13 21:08:25.936566 systemd[1]: Started sshd@20-10.230.36.54:22-139.178.68.195:44836.service - OpenSSH per-connection server daemon (139.178.68.195:44836). Jan 13 21:08:26.842462 sshd[4328]: Accepted publickey for core from 139.178.68.195 port 44836 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:26.845504 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:26.860386 systemd-logind[1501]: New session 23 of user core. Jan 13 21:08:26.868384 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 13 21:08:27.561598 sshd[4333]: Connection closed by 139.178.68.195 port 44836 Jan 13 21:08:27.562734 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:27.568450 systemd[1]: sshd@20-10.230.36.54:22-139.178.68.195:44836.service: Deactivated successfully. Jan 13 21:08:27.571809 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:08:27.575170 systemd-logind[1501]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:08:27.577198 systemd-logind[1501]: Removed session 23. Jan 13 21:08:32.731203 systemd[1]: Started sshd@21-10.230.36.54:22-139.178.68.195:44846.service - OpenSSH per-connection server daemon (139.178.68.195:44846). Jan 13 21:08:33.622695 sshd[4344]: Accepted publickey for core from 139.178.68.195 port 44846 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:33.625308 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:33.633643 systemd-logind[1501]: New session 24 of user core. Jan 13 21:08:33.642414 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:08:34.324756 sshd[4346]: Connection closed by 139.178.68.195 port 44846 Jan 13 21:08:34.325784 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:34.330651 systemd[1]: sshd@21-10.230.36.54:22-139.178.68.195:44846.service: Deactivated successfully. Jan 13 21:08:34.333488 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:08:34.336120 systemd-logind[1501]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:08:34.337768 systemd-logind[1501]: Removed session 24. Jan 13 21:08:34.487593 systemd[1]: Started sshd@22-10.230.36.54:22-139.178.68.195:44850.service - OpenSSH per-connection server daemon (139.178.68.195:44850). Jan 13 21:08:35.377420 sshd[4357]: Accepted publickey for core from 139.178.68.195 port 44850 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:35.379611 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:35.388256 systemd-logind[1501]: New session 25 of user core. Jan 13 21:08:35.393444 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 21:08:37.321102 kubelet[2774]: I0113 21:08:37.319927 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tglk4" podStartSLOduration=132.319865442 podStartE2EDuration="2m12.319865442s" podCreationTimestamp="2025-01-13 21:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:06:59.728410285 +0000 UTC m=+47.621012443" watchObservedRunningTime="2025-01-13 21:08:37.319865442 +0000 UTC m=+145.212467593" Jan 13 21:08:37.444614 containerd[1514]: time="2025-01-13T21:08:37.444439131Z" level=info msg="StopContainer for \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\" with timeout 30 (s)" Jan 13 21:08:37.450183 containerd[1514]: time="2025-01-13T21:08:37.449561339Z" level=info msg="Stop container \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\" with signal terminated" Jan 13 21:08:37.506465 containerd[1514]: time="2025-01-13T21:08:37.506380602Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:08:37.507240 systemd[1]: cri-containerd-6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a.scope: Deactivated successfully. Jan 13 21:08:37.524219 containerd[1514]: time="2025-01-13T21:08:37.523532082Z" level=info msg="StopContainer for \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\" with timeout 2 (s)" Jan 13 21:08:37.524219 containerd[1514]: time="2025-01-13T21:08:37.523937511Z" level=info msg="Stop container \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\" with signal terminated" Jan 13 21:08:37.546345 systemd-networkd[1428]: lxc_health: Link DOWN Jan 13 21:08:37.546360 systemd-networkd[1428]: lxc_health: Lost carrier Jan 13 21:08:37.564287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a-rootfs.mount: Deactivated successfully. Jan 13 21:08:37.576987 systemd[1]: cri-containerd-cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b.scope: Deactivated successfully. Jan 13 21:08:37.579570 kubelet[2774]: E0113 21:08:37.567230 2774 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:08:37.577407 systemd[1]: cri-containerd-cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b.scope: Consumed 10.628s CPU time. 
Jan 13 21:08:37.582631 containerd[1514]: time="2025-01-13T21:08:37.582469387Z" level=info msg="shim disconnected" id=6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a namespace=k8s.io Jan 13 21:08:37.583958 containerd[1514]: time="2025-01-13T21:08:37.583919895Z" level=warning msg="cleaning up after shim disconnected" id=6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a namespace=k8s.io Jan 13 21:08:37.584182 containerd[1514]: time="2025-01-13T21:08:37.584082148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:08:37.617940 containerd[1514]: time="2025-01-13T21:08:37.617245348Z" level=info msg="StopContainer for \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\" returns successfully" Jan 13 21:08:37.625439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b-rootfs.mount: Deactivated successfully. Jan 13 21:08:37.627662 containerd[1514]: time="2025-01-13T21:08:37.625522512Z" level=info msg="StopPodSandbox for \"41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379\"" Jan 13 21:08:37.632813 containerd[1514]: time="2025-01-13T21:08:37.631564676Z" level=info msg="shim disconnected" id=cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b namespace=k8s.io Jan 13 21:08:37.632813 containerd[1514]: time="2025-01-13T21:08:37.631625862Z" level=warning msg="cleaning up after shim disconnected" id=cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b namespace=k8s.io Jan 13 21:08:37.632813 containerd[1514]: time="2025-01-13T21:08:37.631647483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:08:37.634529 containerd[1514]: time="2025-01-13T21:08:37.627882999Z" level=info msg="Container to stop \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:08:37.639775 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379-shm.mount: Deactivated successfully. Jan 13 21:08:37.656462 systemd[1]: cri-containerd-41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379.scope: Deactivated successfully. 
Jan 13 21:08:37.671198 containerd[1514]: time="2025-01-13T21:08:37.670689061Z" level=info msg="StopContainer for \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\" returns successfully" Jan 13 21:08:37.673565 containerd[1514]: time="2025-01-13T21:08:37.673202696Z" level=info msg="StopPodSandbox for \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\"" Jan 13 21:08:37.673565 containerd[1514]: time="2025-01-13T21:08:37.673322768Z" level=info msg="Container to stop \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:08:37.673565 containerd[1514]: time="2025-01-13T21:08:37.673377319Z" level=info msg="Container to stop \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:08:37.673565 containerd[1514]: time="2025-01-13T21:08:37.673393179Z" level=info msg="Container to stop \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:08:37.673565 containerd[1514]: time="2025-01-13T21:08:37.673407432Z" level=info msg="Container to stop \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:08:37.673565 containerd[1514]: time="2025-01-13T21:08:37.673420992Z" level=info msg="Container to stop \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:08:37.678743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a-shm.mount: Deactivated successfully. Jan 13 21:08:37.695457 systemd[1]: cri-containerd-51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a.scope: Deactivated successfully. 
Jan 13 21:08:37.725508 containerd[1514]: time="2025-01-13T21:08:37.725397738Z" level=info msg="shim disconnected" id=41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379 namespace=k8s.io Jan 13 21:08:37.725508 containerd[1514]: time="2025-01-13T21:08:37.725495207Z" level=warning msg="cleaning up after shim disconnected" id=41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379 namespace=k8s.io Jan 13 21:08:37.725508 containerd[1514]: time="2025-01-13T21:08:37.725511614Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:08:37.732679 containerd[1514]: time="2025-01-13T21:08:37.732539473Z" level=info msg="shim disconnected" id=51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a namespace=k8s.io Jan 13 21:08:37.732679 containerd[1514]: time="2025-01-13T21:08:37.732587006Z" level=warning msg="cleaning up after shim disconnected" id=51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a namespace=k8s.io Jan 13 21:08:37.732679 containerd[1514]: time="2025-01-13T21:08:37.732601805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:08:37.752279 containerd[1514]: time="2025-01-13T21:08:37.752101398Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:08:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:08:37.754458 containerd[1514]: time="2025-01-13T21:08:37.754428931Z" level=info msg="TearDown network for sandbox \"41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379\" successfully" Jan 13 21:08:37.754458 containerd[1514]: time="2025-01-13T21:08:37.754508180Z" level=info msg="StopPodSandbox for \"41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379\" returns successfully" Jan 13 21:08:37.763401 containerd[1514]: time="2025-01-13T21:08:37.763285753Z" level=info msg="TearDown network for sandbox \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" successfully" Jan 13 21:08:37.763401 containerd[1514]: time="2025-01-13T21:08:37.763325906Z" level=info msg="StopPodSandbox for \"51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a\" returns successfully" Jan 13 21:08:37.812884 kubelet[2774]: I0113 21:08:37.812206 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e18377e-529d-4307-a5cd-8bc992479c7d-cilium-config-path\") pod \"3e18377e-529d-4307-a5cd-8bc992479c7d\" (UID: \"3e18377e-529d-4307-a5cd-8bc992479c7d\") " Jan 13 21:08:37.812884 kubelet[2774]: I0113 21:08:37.812290 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wffht\" (UniqueName: \"kubernetes.io/projected/3e18377e-529d-4307-a5cd-8bc992479c7d-kube-api-access-wffht\") pod \"3e18377e-529d-4307-a5cd-8bc992479c7d\" (UID: \"3e18377e-529d-4307-a5cd-8bc992479c7d\") " 
Jan 13 21:08:37.833066 kubelet[2774]: I0113 21:08:37.831578 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e18377e-529d-4307-a5cd-8bc992479c7d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e18377e-529d-4307-a5cd-8bc992479c7d" (UID: "3e18377e-529d-4307-a5cd-8bc992479c7d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:08:37.847646 kubelet[2774]: I0113 21:08:37.847587 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e18377e-529d-4307-a5cd-8bc992479c7d-kube-api-access-wffht" (OuterVolumeSpecName: "kube-api-access-wffht") pod "3e18377e-529d-4307-a5cd-8bc992479c7d" (UID: "3e18377e-529d-4307-a5cd-8bc992479c7d"). InnerVolumeSpecName "kube-api-access-wffht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:08:37.913122 kubelet[2774]: I0113 21:08:37.913042 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-run\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913122 kubelet[2774]: I0113 21:08:37.913116 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-bpf-maps\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913409 kubelet[2774]: I0113 21:08:37.913200 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-kernel\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913409 kubelet[2774]: I0113 21:08:37.913238 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-hubble-tls\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913409 kubelet[2774]: I0113 21:08:37.913261 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-xtables-lock\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913409 kubelet[2774]: I0113 21:08:37.913299 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9baad02a-9c4c-47de-bcc5-9bab8abac647-clustermesh-secrets\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913409 kubelet[2774]: I0113 21:08:37.913335 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjz2l\" (UniqueName: \"kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-kube-api-access-zjz2l\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913409 kubelet[2774]: I0113 21:08:37.913361 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-config-path\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " 
Jan 13 21:08:37.913742 kubelet[2774]: I0113 21:08:37.913391 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-net\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913742 kubelet[2774]: I0113 21:08:37.913436 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cni-path\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913742 kubelet[2774]: I0113 21:08:37.913467 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-lib-modules\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913742 kubelet[2774]: I0113 21:08:37.913489 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-hostproc\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913742 kubelet[2774]: I0113 21:08:37.913532 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-etc-cni-netd\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.913742 kubelet[2774]: I0113 21:08:37.913556 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-cgroup\") pod \"9baad02a-9c4c-47de-bcc5-9bab8abac647\" (UID: \"9baad02a-9c4c-47de-bcc5-9bab8abac647\") " Jan 13 21:08:37.917353 kubelet[2774]: I0113 21:08:37.916506 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e18377e-529d-4307-a5cd-8bc992479c7d-cilium-config-path\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:37.917353 kubelet[2774]: I0113 21:08:37.916555 2774 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wffht\" (UniqueName: \"kubernetes.io/projected/3e18377e-529d-4307-a5cd-8bc992479c7d-kube-api-access-wffht\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:37.917353 kubelet[2774]: I0113 21:08:37.916609 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" 
Jan 13 21:08:37.918332 kubelet[2774]: I0113 21:08:37.918276 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-kube-api-access-zjz2l" (OuterVolumeSpecName: "kube-api-access-zjz2l") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "kube-api-access-zjz2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:08:37.918466 kubelet[2774]: I0113 21:08:37.918442 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.918625 kubelet[2774]: I0113 21:08:37.918601 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.918759 kubelet[2774]: I0113 21:08:37.918735 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.920709 kubelet[2774]: I0113 21:08:37.920665 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:08:37.920850 kubelet[2774]: I0113 21:08:37.920728 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.920850 kubelet[2774]: I0113 21:08:37.920770 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cni-path" (OuterVolumeSpecName: "cni-path") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.920850 kubelet[2774]: I0113 21:08:37.920829 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" 
Jan 13 21:08:37.921002 kubelet[2774]: I0113 21:08:37.920859 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-hostproc" (OuterVolumeSpecName: "hostproc") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.921002 kubelet[2774]: I0113 21:08:37.920900 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.921002 kubelet[2774]: I0113 21:08:37.920929 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:08:37.922611 kubelet[2774]: I0113 21:08:37.922564 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:08:37.924577 kubelet[2774]: I0113 21:08:37.924539 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9baad02a-9c4c-47de-bcc5-9bab8abac647-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9baad02a-9c4c-47de-bcc5-9bab8abac647" (UID: "9baad02a-9c4c-47de-bcc5-9bab8abac647"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:08:37.937181 kubelet[2774]: I0113 21:08:37.935199 2774 scope.go:117] "RemoveContainer" containerID="6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a" Jan 13 21:08:37.948242 systemd[1]: Removed slice kubepods-besteffort-pod3e18377e_529d_4307_a5cd_8bc992479c7d.slice - libcontainer container kubepods-besteffort-pod3e18377e_529d_4307_a5cd_8bc992479c7d.slice. Jan 13 21:08:37.963729 containerd[1514]: time="2025-01-13T21:08:37.962947706Z" level=info msg="RemoveContainer for \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\"" Jan 13 21:08:37.970216 containerd[1514]: time="2025-01-13T21:08:37.970180688Z" level=info msg="RemoveContainer for \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\" returns successfully" Jan 13 21:08:37.987069 kubelet[2774]: I0113 21:08:37.986437 2774 scope.go:117] "RemoveContainer" containerID="6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a" Jan 13 21:08:37.987214 containerd[1514]: time="2025-01-13T21:08:37.987042009Z" level=error msg="ContainerStatus for \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\": not found" Jan 13 21:08:37.996505 systemd[1]: Removed slice kubepods-burstable-pod9baad02a_9c4c_47de_bcc5_9bab8abac647.slice - libcontainer container kubepods-burstable-pod9baad02a_9c4c_47de_bcc5_9bab8abac647.slice. 
Jan 13 21:08:37.997269 kubelet[2774]: E0113 21:08:37.996469 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\": not found" containerID="6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a" Jan 13 21:08:37.996631 systemd[1]: kubepods-burstable-pod9baad02a_9c4c_47de_bcc5_9bab8abac647.slice: Consumed 10.766s CPU time. Jan 13 21:08:38.000177 kubelet[2774]: I0113 21:08:37.996656 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a"} err="failed to get container status \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d1f6a3d4497f57ee389dab8fa0a12152f8196f3b063140604032716f2fe657a\": not found" Jan 13 21:08:38.000177 kubelet[2774]: I0113 21:08:37.999898 2774 scope.go:117] "RemoveContainer" containerID="cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b" Jan 13 21:08:38.003915 containerd[1514]: time="2025-01-13T21:08:38.003862423Z" level=info msg="RemoveContainer for \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\"" Jan 13 21:08:38.007336 containerd[1514]: time="2025-01-13T21:08:38.007303331Z" level=info msg="RemoveContainer for \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\" returns successfully" Jan 13 21:08:38.008102 kubelet[2774]: I0113 21:08:38.007709 2774 scope.go:117] "RemoveContainer" containerID="e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1" Jan 13 21:08:38.011125 containerd[1514]: time="2025-01-13T21:08:38.011086923Z" level=info msg="RemoveContainer for \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\"" Jan 13 21:08:38.015698 containerd[1514]: time="2025-01-13T21:08:38.015649419Z" level=info msg="RemoveContainer for \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\" returns successfully" Jan 13 21:08:38.016115 kubelet[2774]: I0113 21:08:38.016080 2774 scope.go:117] "RemoveContainer" containerID="6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647" Jan 13 21:08:38.016729 kubelet[2774]: I0113 21:08:38.016703 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-cgroup\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016729 kubelet[2774]: I0113 21:08:38.016733 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-run\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016749 2774 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-bpf-maps\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016764 2774 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-kernel\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" 
Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016788 2774 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-hubble-tls\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016817 2774 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-xtables-lock\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016832 2774 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9baad02a-9c4c-47de-bcc5-9bab8abac647-clustermesh-secrets\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016855 2774 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zjz2l\" (UniqueName: \"kubernetes.io/projected/9baad02a-9c4c-47de-bcc5-9bab8abac647-kube-api-access-zjz2l\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016871 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9baad02a-9c4c-47de-bcc5-9bab8abac647-cilium-config-path\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.016930 kubelet[2774]: I0113 21:08:38.016888 2774 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-host-proc-sys-net\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.017742 kubelet[2774]: I0113 21:08:38.016902 2774 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-cni-path\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.017742 kubelet[2774]: I0113 21:08:38.016918 2774 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-lib-modules\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.017742 kubelet[2774]: I0113 21:08:38.016931 2774 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-hostproc\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.017742 kubelet[2774]: I0113 21:08:38.016945 2774 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9baad02a-9c4c-47de-bcc5-9bab8abac647-etc-cni-netd\") on node \"srv-85agx.gb1.brightbox.com\" DevicePath \"\"" Jan 13 21:08:38.019130 containerd[1514]: time="2025-01-13T21:08:38.017939232Z" level=info msg="RemoveContainer for \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\"" Jan 13 21:08:38.021044 containerd[1514]: time="2025-01-13T21:08:38.020974029Z" level=info msg="RemoveContainer for \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\" returns successfully" Jan 13 21:08:38.021493 kubelet[2774]: I0113 21:08:38.021171 2774 scope.go:117] "RemoveContainer" containerID="4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52" Jan 13 21:08:38.024066 containerd[1514]: time="2025-01-13T21:08:38.024032225Z" level=info msg="RemoveContainer for \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\"" 
Jan 13 21:08:38.028579 containerd[1514]: time="2025-01-13T21:08:38.028533452Z" level=info msg="RemoveContainer for \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\" returns successfully" Jan 13 21:08:38.029008 kubelet[2774]: I0113 21:08:38.028808 2774 scope.go:117] "RemoveContainer" containerID="6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253" Jan 13 21:08:38.030223 containerd[1514]: time="2025-01-13T21:08:38.030085893Z" level=info msg="RemoveContainer for \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\"" Jan 13 21:08:38.033378 containerd[1514]: time="2025-01-13T21:08:38.033326778Z" level=info msg="RemoveContainer for \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\" returns successfully" Jan 13 21:08:38.035031 kubelet[2774]: I0113 21:08:38.035006 2774 scope.go:117] "RemoveContainer" containerID="cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b" Jan 13 21:08:38.035752 containerd[1514]: time="2025-01-13T21:08:38.035711382Z" level=error msg="ContainerStatus for \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\": not found" Jan 13 21:08:38.036002 kubelet[2774]: E0113 21:08:38.035922 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\": not found" containerID="cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b" Jan 13 21:08:38.036002 kubelet[2774]: I0113 21:08:38.035974 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b"} err="failed to get container status \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd525d21d37f972403fb87a4f142a11bdef9ed14aff4e19af8b6431f2d425a7b\": not found" Jan 13 21:08:38.036002 kubelet[2774]: I0113 21:08:38.036003 2774 scope.go:117] "RemoveContainer" containerID="e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1" Jan 13 21:08:38.036369 containerd[1514]: time="2025-01-13T21:08:38.036213243Z" level=error msg="ContainerStatus for \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\": not found" Jan 13 21:08:38.036438 kubelet[2774]: E0113 21:08:38.036358 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\": not found" containerID="e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1" 
Jan 13 21:08:38.036668 kubelet[2774]: I0113 21:08:38.036446 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1"} err="failed to get container status \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1578dced54102ea51eaea837fea4dd93789be3868cf25fb717941d4894e0ab1\": not found" Jan 13 21:08:38.036668 kubelet[2774]: I0113 21:08:38.036469 2774 scope.go:117] "RemoveContainer" containerID="6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647" Jan 13 21:08:38.037204 containerd[1514]: time="2025-01-13T21:08:38.036989714Z" level=error msg="ContainerStatus for \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\": not found" Jan 13 21:08:38.037434 kubelet[2774]: E0113 21:08:38.037387 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\": not found" containerID="6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647" Jan 13 21:08:38.037571 kubelet[2774]: I0113 21:08:38.037435 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647"} err="failed to get container status \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a52ab2871dca8913ac8cae3994fe3584821f47c91b3f0a594b5c248089f5647\": not found" Jan 13 21:08:38.037571 kubelet[2774]: I0113 21:08:38.037456 2774 scope.go:117] "RemoveContainer" containerID="4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52" Jan 13 21:08:38.038010 kubelet[2774]: E0113 21:08:38.037830 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\": not found" containerID="4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52" Jan 13 21:08:38.038010 kubelet[2774]: I0113 21:08:38.037867 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52"} err="failed to get container status \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\": not found" Jan 13 21:08:38.038010 kubelet[2774]: I0113 21:08:38.037888 2774 scope.go:117] "RemoveContainer" containerID="6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253" Jan 13 21:08:38.038235 containerd[1514]: time="2025-01-13T21:08:38.037640258Z" level=error msg="ContainerStatus for \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e9c7d1b5fe1aa1cb276e05346f4b1471b0e1ce0c89426cf3aa504297a8f0e52\": not found" Jan 13 21:08:38.038636 containerd[1514]: time="2025-01-13T21:08:38.038501172Z" level=error msg="ContainerStatus for \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\": not found" 
Jan 13 21:08:38.038750 kubelet[2774]: E0113 21:08:38.038708 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\": not found" containerID="6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253" Jan 13 21:08:38.038855 kubelet[2774]: I0113 21:08:38.038747 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253"} err="failed to get container status \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ad6908883bacdaaa1af7c9aee8be428c57a3e2fc804ad91399aa3a357051253\": not found" Jan 13 21:08:38.373016 kubelet[2774]: I0113 21:08:38.372885 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e18377e-529d-4307-a5cd-8bc992479c7d" path="/var/lib/kubelet/pods/3e18377e-529d-4307-a5cd-8bc992479c7d/volumes" Jan 13 21:08:38.375582 kubelet[2774]: I0113 21:08:38.374780 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" path="/var/lib/kubelet/pods/9baad02a-9c4c-47de-bcc5-9bab8abac647/volumes" Jan 13 21:08:38.466469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41ec423ab22bdea5a8b1815689f44317f15a21a93c5a95fb68a80fbe50957379-rootfs.mount: Deactivated successfully. Jan 13 21:08:38.466647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51079f8fa5965f6550225547df0f512143410cd57d3c245e2ab5901800f79a6a-rootfs.mount: Deactivated successfully. Jan 13 21:08:38.466821 systemd[1]: var-lib-kubelet-pods-3e18377e\x2d529d\x2d4307\x2da5cd\x2d8bc992479c7d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwffht.mount: Deactivated successfully. Jan 13 21:08:38.466963 systemd[1]: var-lib-kubelet-pods-9baad02a\x2d9c4c\x2d47de\x2dbcc5\x2d9bab8abac647-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjz2l.mount: Deactivated successfully. Jan 13 21:08:38.467102 systemd[1]: var-lib-kubelet-pods-9baad02a\x2d9c4c\x2d47de\x2dbcc5\x2d9bab8abac647-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:08:38.467248 systemd[1]: var-lib-kubelet-pods-9baad02a\x2d9c4c\x2d47de\x2dbcc5\x2d9bab8abac647-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:08:39.411420 sshd[4359]: Connection closed by 139.178.68.195 port 44850 Jan 13 21:08:39.414365 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:39.421705 systemd[1]: sshd@22-10.230.36.54:22-139.178.68.195:44850.service: Deactivated successfully. Jan 13 21:08:39.425014 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:08:39.427164 systemd-logind[1501]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:08:39.430664 systemd-logind[1501]: Removed session 25. Jan 13 21:08:39.572510 systemd[1]: Started sshd@23-10.230.36.54:22-139.178.68.195:38782.service - OpenSSH per-connection server daemon (139.178.68.195:38782). Jan 13 21:08:40.479385 sshd[4519]: Accepted publickey for core from 139.178.68.195 port 38782 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 21:08:40.481623 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:40.490272 systemd-logind[1501]: New session 26 of user core. 
Jan 13 21:08:40.496395 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 21:08:41.926495 kubelet[2774]: I0113 21:08:41.922805 2774 topology_manager.go:215] "Topology Admit Handler" podUID="c3c07beb-c298-4c4c-b153-8f1ef8f60f40" podNamespace="kube-system" podName="cilium-wf49k"
Jan 13 21:08:41.932694 kubelet[2774]: E0113 21:08:41.931274 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" containerName="cilium-agent"
Jan 13 21:08:41.932694 kubelet[2774]: E0113 21:08:41.931317 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" containerName="mount-cgroup"
Jan 13 21:08:41.932694 kubelet[2774]: E0113 21:08:41.931338 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" containerName="apply-sysctl-overwrites"
Jan 13 21:08:41.932694 kubelet[2774]: E0113 21:08:41.931348 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" containerName="mount-bpf-fs"
Jan 13 21:08:41.932694 kubelet[2774]: E0113 21:08:41.931359 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" containerName="clean-cilium-state"
Jan 13 21:08:41.932694 kubelet[2774]: E0113 21:08:41.931374 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e18377e-529d-4307-a5cd-8bc992479c7d" containerName="cilium-operator"
Jan 13 21:08:41.937017 kubelet[2774]: I0113 21:08:41.931506 2774 memory_manager.go:354] "RemoveStaleState removing state" podUID="9baad02a-9c4c-47de-bcc5-9bab8abac647" containerName="cilium-agent"
Jan 13 21:08:41.937162 kubelet[2774]: I0113 21:08:41.937128 2774 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e18377e-529d-4307-a5cd-8bc992479c7d" containerName="cilium-operator"
Jan 13 21:08:41.992146 systemd[1]: Created slice kubepods-burstable-podc3c07beb_c298_4c4c_b153_8f1ef8f60f40.slice - libcontainer container kubepods-burstable-podc3c07beb_c298_4c4c_b153_8f1ef8f60f40.slice.
Jan 13 21:08:42.036451 sshd[4521]: Connection closed by 139.178.68.195 port 38782
Jan 13 21:08:42.039658 sshd-session[4519]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:42.046313 systemd-logind[1501]: Session 26 logged out. Waiting for processes to exit.
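The "Created slice" entry shows the systemd cgroup driver's unit-naming rule: the pod lands under its QoS-class slice, with the dashes in the pod UID rewritten to underscores because "-" acts as a hierarchy separator in slice names. A small illustrative Go helper (not kubelet source) that reproduces the name seen above:

package main

import (
	"fmt"
	"strings"
)

// podSlice is a hedged reconstruction of the slice name logged above:
// QoS class slice plus the pod UID with "-" mapped to "_".
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Prints: kubepods-burstable-podc3c07beb_c298_4c4c_b153_8f1ef8f60f40.slice
	fmt.Println(podSlice("burstable", "c3c07beb-c298-4c4c-b153-8f1ef8f60f40"))
}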
Jan 13 21:08:42.047197 kubelet[2774]: I0113 21:08:42.047118 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-lib-modules\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.047571 kubelet[2774]: I0113 21:08:42.047307 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-cilium-run\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.047672 kubelet[2774]: I0113 21:08:42.047623 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-cilium-cgroup\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.047672 kubelet[2774]: I0113 21:08:42.047653 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-hostproc\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.047799 kubelet[2774]: I0113 21:08:42.047755 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-cni-path\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.048578 kubelet[2774]: I0113 21:08:42.047848 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-host-proc-sys-net\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.048578 kubelet[2774]: I0113 21:08:42.047911 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlftg\" (UniqueName: \"kubernetes.io/projected/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-kube-api-access-vlftg\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.048578 kubelet[2774]: I0113 21:08:42.047943 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-bpf-maps\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.048578 kubelet[2774]: I0113 21:08:42.048016 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-etc-cni-netd\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.048578 kubelet[2774]: I0113 21:08:42.048064 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-xtables-lock\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.048578 kubelet[2774]: I0113 21:08:42.048124 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-cilium-config-path\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.048013 systemd[1]: sshd@23-10.230.36.54:22-139.178.68.195:38782.service: Deactivated successfully.
Jan 13 21:08:42.049047 kubelet[2774]: I0113 21:08:42.048225 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-host-proc-sys-kernel\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.049047 kubelet[2774]: I0113 21:08:42.048266 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-hubble-tls\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.049047 kubelet[2774]: I0113 21:08:42.048337 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-clustermesh-secrets\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.049047 kubelet[2774]: I0113 21:08:42.048507 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c3c07beb-c298-4c4c-b153-8f1ef8f60f40-cilium-ipsec-secrets\") pod \"cilium-wf49k\" (UID: \"c3c07beb-c298-4c4c-b153-8f1ef8f60f40\") " pod="kube-system/cilium-wf49k"
Jan 13 21:08:42.051799 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:08:42.053422 systemd-logind[1501]: Removed session 26.
Jan 13 21:08:42.204492 systemd[1]: Started sshd@24-10.230.36.54:22-139.178.68.195:38796.service - OpenSSH per-connection server daemon (139.178.68.195:38796).
Jan 13 21:08:42.304137 containerd[1514]: time="2025-01-13T21:08:42.304060413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wf49k,Uid:c3c07beb-c298-4c4c-b153-8f1ef8f60f40,Namespace:kube-system,Attempt:0,}"
Jan 13 21:08:42.348069 containerd[1514]: time="2025-01-13T21:08:42.347084361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:08:42.348069 containerd[1514]: time="2025-01-13T21:08:42.347563950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:08:42.348069 containerd[1514]: time="2025-01-13T21:08:42.347590680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:08:42.348069 containerd[1514]: time="2025-01-13T21:08:42.347758021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:08:42.378449 systemd[1]: Started cri-containerd-77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0.scope - libcontainer container 77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0.
Jan 13 21:08:42.417617 containerd[1514]: time="2025-01-13T21:08:42.417469964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wf49k,Uid:c3c07beb-c298-4c4c-b153-8f1ef8f60f40,Namespace:kube-system,Attempt:0,} returns sandbox id \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\""
Jan 13 21:08:42.426660 containerd[1514]: time="2025-01-13T21:08:42.426376482Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:08:42.440144 containerd[1514]: time="2025-01-13T21:08:42.439999711Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038\""
Jan 13 21:08:42.441319 containerd[1514]: time="2025-01-13T21:08:42.441088526Z" level=info msg="StartContainer for \"f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038\""
Jan 13 21:08:42.480465 systemd[1]: Started cri-containerd-f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038.scope - libcontainer container f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038.
Jan 13 21:08:42.543277 containerd[1514]: time="2025-01-13T21:08:42.543020930Z" level=info msg="StartContainer for \"f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038\" returns successfully"
Jan 13 21:08:42.566545 systemd[1]: cri-containerd-f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038.scope: Deactivated successfully.
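The entries above trace the standard CRI pod-start sequence: RunPodSandbox returns a sandbox ID, CreateContainer registers a container inside that sandbox, and StartContainer launches it. Below is a hedged Go sketch of the same three calls against containerd's CRI endpoint; the socket path and image reference are assumptions, and the sandbox/container configs are pared down to the metadata visible in the log.

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// RunPodSandbox: the metadata mirrors the PodSandboxMetadata logged above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-wf49k",
				Uid:       "c3c07beb-c298-4c4c-b153-8f1ef8f60f40",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}

	// CreateContainer within the returned sandbox (the "mount-cgroup" step);
	// the image reference is a placeholder, not taken from the log.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:latest"},
		},
	})
	if err != nil {
		panic(err)
	}

	// StartContainer, which the log reports as "returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}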
Jan 13 21:08:42.581472 kubelet[2774]: E0113 21:08:42.581406 2774 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:08:42.615277 containerd[1514]: time="2025-01-13T21:08:42.615167898Z" level=info msg="shim disconnected" id=f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038 namespace=k8s.io
Jan 13 21:08:42.615608 containerd[1514]: time="2025-01-13T21:08:42.615569389Z" level=warning msg="cleaning up after shim disconnected" id=f95a8f1636e6b1e05104fb209b80f930c94450ef5a48e07a8a27f4722ecce038 namespace=k8s.io
Jan 13 21:08:42.615741 containerd[1514]: time="2025-01-13T21:08:42.615702102Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:43.019709 containerd[1514]: time="2025-01-13T21:08:43.019642438Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:08:43.041044 containerd[1514]: time="2025-01-13T21:08:43.041003847Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4\""
Jan 13 21:08:43.042593 containerd[1514]: time="2025-01-13T21:08:43.042406144Z" level=info msg="StartContainer for \"f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4\""
Jan 13 21:08:43.087432 systemd[1]: Started cri-containerd-f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4.scope - libcontainer container f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4.
Jan 13 21:08:43.102210 sshd[4535]: Accepted publickey for core from 139.178.68.195 port 38796 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:08:43.105049 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:43.117098 systemd-logind[1501]: New session 27 of user core.
Jan 13 21:08:43.123841 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 21:08:43.140753 containerd[1514]: time="2025-01-13T21:08:43.140680326Z" level=info msg="StartContainer for \"f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4\" returns successfully"
Jan 13 21:08:43.165553 systemd[1]: cri-containerd-f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4.scope: Deactivated successfully.
Jan 13 21:08:43.201866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4-rootfs.mount: Deactivated successfully.
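The "Container runtime network not ready" error is kubelet relaying the runtime's NetworkReady condition, which remains false until the starting Cilium agent writes a CNI configuration. A minimal sketch of the underlying CRI Status call, assuming the default containerd socket:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		panic(err)
	}
	// Until the CNI config exists, the NetworkReady condition reads
	// status=false reason=NetworkPluginNotReady, as kubelet logs above.
	for _, c := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason:%s message:%s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}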
Jan 13 21:08:43.207261 containerd[1514]: time="2025-01-13T21:08:43.206686692Z" level=info msg="shim disconnected" id=f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4 namespace=k8s.io
Jan 13 21:08:43.207261 containerd[1514]: time="2025-01-13T21:08:43.206772038Z" level=warning msg="cleaning up after shim disconnected" id=f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4 namespace=k8s.io
Jan 13 21:08:43.207261 containerd[1514]: time="2025-01-13T21:08:43.206788523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:43.214654 containerd[1514]: time="2025-01-13T21:08:43.214618624Z" level=error msg="collecting metrics for f057daa58fa3b621d134bbade5765bf164a007f1d4df1b658ca60ae0cbf980d4" error="ttrpc: closed: unknown"
Jan 13 21:08:43.240800 containerd[1514]: time="2025-01-13T21:08:43.239848174Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:08:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:08:43.717183 sshd[4674]: Connection closed by 139.178.68.195 port 38796
Jan 13 21:08:43.716247 sshd-session[4535]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:43.720559 systemd[1]: sshd@24-10.230.36.54:22-139.178.68.195:38796.service: Deactivated successfully.
Jan 13 21:08:43.723568 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:08:43.725907 systemd-logind[1501]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:08:43.727593 systemd-logind[1501]: Removed session 27.
Jan 13 21:08:43.873477 systemd[1]: Started sshd@25-10.230.36.54:22-139.178.68.195:38812.service - OpenSSH per-connection server daemon (139.178.68.195:38812).
Jan 13 21:08:44.000171 containerd[1514]: time="2025-01-13T21:08:43.999020829Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:08:44.021649 containerd[1514]: time="2025-01-13T21:08:44.021515732Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d\""
Jan 13 21:08:44.023298 containerd[1514]: time="2025-01-13T21:08:44.023261866Z" level=info msg="StartContainer for \"89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d\""
Jan 13 21:08:44.074763 systemd[1]: Started cri-containerd-89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d.scope - libcontainer container 89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d.
Jan 13 21:08:44.135596 containerd[1514]: time="2025-01-13T21:08:44.135350569Z" level=info msg="StartContainer for \"89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d\" returns successfully"
Jan 13 21:08:44.145120 systemd[1]: cri-containerd-89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d.scope: Deactivated successfully.
Jan 13 21:08:44.193624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d-rootfs.mount: Deactivated successfully.
Jan 13 21:08:44.198447 containerd[1514]: time="2025-01-13T21:08:44.198109510Z" level=info msg="shim disconnected" id=89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d namespace=k8s.io
Jan 13 21:08:44.198447 containerd[1514]: time="2025-01-13T21:08:44.198239731Z" level=warning msg="cleaning up after shim disconnected" id=89804731265899601d097f11e956adbb0c59951ada24f28a97796848dc46679d namespace=k8s.io
Jan 13 21:08:44.198447 containerd[1514]: time="2025-01-13T21:08:44.198257119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:44.776496 sshd[4711]: Accepted publickey for core from 139.178.68.195 port 38812 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:08:44.778616 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:44.786193 systemd-logind[1501]: New session 28 of user core.
Jan 13 21:08:44.791474 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 21:08:45.004625 containerd[1514]: time="2025-01-13T21:08:45.004130715Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:08:45.031724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954909649.mount: Deactivated successfully.
Jan 13 21:08:45.035853 containerd[1514]: time="2025-01-13T21:08:45.035663339Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4\""
Jan 13 21:08:45.036789 containerd[1514]: time="2025-01-13T21:08:45.036600774Z" level=info msg="StartContainer for \"7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4\""
Jan 13 21:08:45.081445 systemd[1]: Started cri-containerd-7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4.scope - libcontainer container 7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4.
Jan 13 21:08:45.124102 systemd[1]: cri-containerd-7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4.scope: Deactivated successfully.
Jan 13 21:08:45.126703 containerd[1514]: time="2025-01-13T21:08:45.126470061Z" level=info msg="StartContainer for \"7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4\" returns successfully"
Jan 13 21:08:45.158174 containerd[1514]: time="2025-01-13T21:08:45.157891842Z" level=info msg="shim disconnected" id=7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4 namespace=k8s.io
Jan 13 21:08:45.158174 containerd[1514]: time="2025-01-13T21:08:45.158014192Z" level=warning msg="cleaning up after shim disconnected" id=7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4 namespace=k8s.io
Jan 13 21:08:45.158174 containerd[1514]: time="2025-01-13T21:08:45.158030070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:45.159896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4-rootfs.mount: Deactivated successfully.
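Each Cilium init step above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) is a run-once container, which is why every "StartContainer ... returns successfully" is promptly followed by a scope deactivation and a shim-disconnected triplet. A hedged sketch of polling such a container until it reaches CONTAINER_EXITED; kubelet itself learns about exits through its own event machinery, so this is illustrative only:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// waitExited polls ContainerStatus until the container reports
// CONTAINER_EXITED and returns its exit code.
func waitExited(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (int32, error) {
	for {
		resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
		if err != nil {
			return 0, err
		}
		if resp.Status.State == runtimeapi.ContainerState_CONTAINER_EXITED {
			return resp.Status.ExitCode, nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(200 * time.Millisecond):
		}
	}
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	code, err := waitExited(context.Background(),
		runtimeapi.NewRuntimeServiceClient(conn),
		"7efe02d3855498afd393bf14a6556a036d41ebd081c25846fad7aa5d9ce1ddd4")
	fmt.Println(code, err)
}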
Jan 13 21:08:45.868112 kubelet[2774]: I0113 21:08:45.867842 2774 setters.go:580] "Node became not ready" node="srv-85agx.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:08:45Z","lastTransitionTime":"2025-01-13T21:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:08:46.011795 containerd[1514]: time="2025-01-13T21:08:46.011701338Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:08:46.041833 containerd[1514]: time="2025-01-13T21:08:46.041769964Z" level=info msg="CreateContainer within sandbox \"77fdf82bb12911600d65464b517c9f2b623f4d9ecf8b317eeba674b5686b7ee0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b88915289d7eab6f654fafca9af5ab4b80ecdc2e1d85aa7447fad1eac837abea\""
Jan 13 21:08:46.046467 containerd[1514]: time="2025-01-13T21:08:46.044685739Z" level=info msg="StartContainer for \"b88915289d7eab6f654fafca9af5ab4b80ecdc2e1d85aa7447fad1eac837abea\""
Jan 13 21:08:46.108387 systemd[1]: Started cri-containerd-b88915289d7eab6f654fafca9af5ab4b80ecdc2e1d85aa7447fad1eac837abea.scope - libcontainer container b88915289d7eab6f654fafca9af5ab4b80ecdc2e1d85aa7447fad1eac837abea.
Jan 13 21:08:46.160272 containerd[1514]: time="2025-01-13T21:08:46.158440464Z" level=info msg="StartContainer for \"b88915289d7eab6f654fafca9af5ab4b80ecdc2e1d85aa7447fad1eac837abea\" returns successfully"
Jan 13 21:08:46.905442 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:08:47.045413 kubelet[2774]: I0113 21:08:47.044764 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wf49k" podStartSLOduration=6.044720004 podStartE2EDuration="6.044720004s" podCreationTimestamp="2025-01-13 21:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:08:47.043354947 +0000 UTC m=+154.935957124" watchObservedRunningTime="2025-01-13 21:08:47.044720004 +0000 UTC m=+154.937322149"
Jan 13 21:08:50.747005 systemd-networkd[1428]: lxc_health: Link UP
Jan 13 21:08:50.755310 systemd-networkd[1428]: lxc_health: Gained carrier
Jan 13 21:08:52.305443 systemd[1]: run-containerd-runc-k8s.io-b88915289d7eab6f654fafca9af5ab4b80ecdc2e1d85aa7447fad1eac837abea-runc.nHZVYs.mount: Deactivated successfully.
Jan 13 21:08:52.382406 systemd-networkd[1428]: lxc_health: Gained IPv6LL
Jan 13 21:08:57.151371 kubelet[2774]: E0113 21:08:57.149835 2774 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:38202->127.0.0.1:37955: read tcp 127.0.0.1:38202->127.0.0.1:37955: read: connection reset by peer
Jan 13 21:08:57.151371 kubelet[2774]: E0113 21:08:57.151376 2774 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38202->127.0.0.1:37955: write tcp 127.0.0.1:38202->127.0.0.1:37955: write: broken pipe
Jan 13 21:08:57.301172 sshd[4769]: Connection closed by 139.178.68.195 port 38812
Jan 13 21:08:57.301860 sshd-session[4711]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:57.312440 systemd[1]: sshd@25-10.230.36.54:22-139.178.68.195:38812.service: Deactivated successfully.
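The pod-startup-latency entry is plain arithmetic: podStartSLOduration is observedRunningTime minus podCreationTimestamp, and the zeroed pulling timestamps indicate no image pull was required. A small Go check of the reported 6.044720004s figure:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entry above.
	created, _ := time.Parse(time.RFC3339, "2025-01-13T21:08:41Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:08:47.044720004Z")
	// Prints 6.044720004s, matching podStartSLOduration=6.044720004.
	fmt.Println(running.Sub(created))
}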
Jan 13 21:08:57.319608 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 21:08:57.325200 systemd-logind[1501]: Session 28 logged out. Waiting for processes to exit.
Jan 13 21:08:57.329651 systemd-logind[1501]: Removed session 28.