Jan 30 15:44:13.025879 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 15:44:13.025927 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 15:44:13.025942 kernel: BIOS-provided physical RAM map:
Jan 30 15:44:13.025957 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 15:44:13.025967 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 15:44:13.025977 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 15:44:13.025988 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 30 15:44:13.025998 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 30 15:44:13.026008 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 15:44:13.026018 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 15:44:13.026029 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 15:44:13.026039 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 15:44:13.026054 kernel: NX (Execute Disable) protection: active
Jan 30 15:44:13.026064 kernel: APIC: Static calls initialized
Jan 30 15:44:13.026076 kernel: SMBIOS 2.8 present.
Jan 30 15:44:13.026088 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Jan 30 15:44:13.026099 kernel: Hypervisor detected: KVM
Jan 30 15:44:13.026114 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 15:44:13.026125 kernel: kvm-clock: using sched offset of 4483684761 cycles
Jan 30 15:44:13.026137 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 15:44:13.026149 kernel: tsc: Detected 2799.998 MHz processor
Jan 30 15:44:13.026160 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 15:44:13.026187 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 15:44:13.026210 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 30 15:44:13.026222 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 15:44:13.026234 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 15:44:13.026251 kernel: Using GB pages for direct mapping
Jan 30 15:44:13.026263 kernel: ACPI: Early table checksum verification disabled
Jan 30 15:44:13.026274 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Jan 30 15:44:13.026285 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:44:13.026296 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:44:13.026307 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:44:13.026319 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 30 15:44:13.026330 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:44:13.026341 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:44:13.026357 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:44:13.026368 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:44:13.026379 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 30 15:44:13.026390 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 30 15:44:13.026402 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 30 15:44:13.026419 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 30 15:44:13.026431 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 30 15:44:13.026447 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 30 15:44:13.026459 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 30 15:44:13.026482 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 15:44:13.026493 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 15:44:13.026504 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 30 15:44:13.026515 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 30 15:44:13.026525 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 30 15:44:13.026536 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 30 15:44:13.026551 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 30 15:44:13.026574 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 30 15:44:13.026585 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 30 15:44:13.026595 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 30 15:44:13.026606 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 30 15:44:13.026616 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 30 15:44:13.026626 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 30 15:44:13.026637 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 30 15:44:13.026647 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 30 15:44:13.026657 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 30 15:44:13.026672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 15:44:13.026683 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 15:44:13.026693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 30 15:44:13.026704 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 30 15:44:13.026727 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 30 15:44:13.026738 kernel: Zone ranges:
Jan 30 15:44:13.026749 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 15:44:13.026759 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 30 15:44:13.026783 kernel: Normal empty
Jan 30 15:44:13.026798 kernel: Movable zone start for each node
Jan 30 15:44:13.026810 kernel: Early memory node ranges
Jan 30 15:44:13.026821 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 15:44:13.026844 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 30 15:44:13.026855 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 30 15:44:13.026866 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 15:44:13.026877 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 15:44:13.026888 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 30 15:44:13.026898 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 15:44:13.026914 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 15:44:13.026925 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 15:44:13.026948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 15:44:13.026959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 15:44:13.026970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 15:44:13.026981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 15:44:13.026993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 15:44:13.027004 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 15:44:13.027015 kernel: TSC deadline timer available
Jan 30 15:44:13.027030 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 30 15:44:13.027042 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 15:44:13.027053 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 15:44:13.027076 kernel: Booting paravirtualized kernel on KVM
Jan 30 15:44:13.027087 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 15:44:13.027098 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 15:44:13.027109 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 15:44:13.027133 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 15:44:13.027144 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 15:44:13.027166 kernel: kvm-guest: PV spinlocks enabled
Jan 30 15:44:13.027178 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 15:44:13.027191 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 15:44:13.027282 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 15:44:13.027295 kernel: random: crng init done
Jan 30 15:44:13.027307 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 15:44:13.027319 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 15:44:13.027330 kernel: Fallback order for Node 0: 0
Jan 30 15:44:13.027348 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 30 15:44:13.027360 kernel: Policy zone: DMA32
Jan 30 15:44:13.027372 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 15:44:13.027383 kernel: software IO TLB: area num 16.
Jan 30 15:44:13.027395 kernel: Memory: 1899476K/2096616K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 196880K reserved, 0K cma-reserved)
Jan 30 15:44:13.027407 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 15:44:13.027419 kernel: Kernel/User page tables isolation: enabled
Jan 30 15:44:13.027430 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 15:44:13.027442 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 15:44:13.027458 kernel: Dynamic Preempt: voluntary
Jan 30 15:44:13.027470 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 15:44:13.027495 kernel: rcu: RCU event tracing is enabled.
Jan 30 15:44:13.027516 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 15:44:13.027527 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 15:44:13.027567 kernel: Rude variant of Tasks RCU enabled.
Jan 30 15:44:13.027583 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 15:44:13.027596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 15:44:13.027608 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 15:44:13.027620 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 30 15:44:13.027632 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 15:44:13.027644 kernel: Console: colour VGA+ 80x25
Jan 30 15:44:13.027660 kernel: printk: console [tty0] enabled
Jan 30 15:44:13.027673 kernel: printk: console [ttyS0] enabled
Jan 30 15:44:13.027685 kernel: ACPI: Core revision 20230628
Jan 30 15:44:13.027697 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 15:44:13.027721 kernel: x2apic enabled
Jan 30 15:44:13.027736 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 15:44:13.027748 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 30 15:44:13.027760 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 30 15:44:13.027784 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 15:44:13.027796 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 15:44:13.027808 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 15:44:13.027819 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 15:44:13.027831 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 15:44:13.027843 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 15:44:13.027854 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 15:44:13.027871 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 15:44:13.027883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 15:44:13.027894 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 15:44:13.027918 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 15:44:13.027936 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 30 15:44:13.027947 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 30 15:44:13.027958 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 15:44:13.027982 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 15:44:13.027994 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 15:44:13.028005 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 15:44:13.028044 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 15:44:13.028056 kernel: Freeing SMP alternatives memory: 32K
Jan 30 15:44:13.028068 kernel: pid_max: default: 32768 minimum: 301
Jan 30 15:44:13.028080 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 15:44:13.028092 kernel: landlock: Up and running.
Jan 30 15:44:13.028104 kernel: SELinux: Initializing.
Jan 30 15:44:13.028116 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 15:44:13.028128 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 15:44:13.028140 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 30 15:44:13.028153 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 15:44:13.028165 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 15:44:13.028182 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 15:44:13.028214 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 30 15:44:13.028230 kernel: signal: max sigframe size: 1776
Jan 30 15:44:13.028243 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 15:44:13.028255 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 15:44:13.028267 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 15:44:13.028280 kernel: smp: Bringing up secondary CPUs ...
Jan 30 15:44:13.028292 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 15:44:13.028304 kernel: .... node #0, CPUs: #1
Jan 30 15:44:13.028322 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 30 15:44:13.028334 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 15:44:13.028346 kernel: smpboot: Max logical packages: 16
Jan 30 15:44:13.028359 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 30 15:44:13.028371 kernel: devtmpfs: initialized
Jan 30 15:44:13.028383 kernel: x86/mm: Memory block size: 128MB
Jan 30 15:44:13.028395 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 15:44:13.028407 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 15:44:13.028419 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 15:44:13.028436 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 15:44:13.028449 kernel: audit: initializing netlink subsys (disabled)
Jan 30 15:44:13.028461 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 15:44:13.028473 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 15:44:13.028485 kernel: audit: type=2000 audit(1738251851.623:1): state=initialized audit_enabled=0 res=1
Jan 30 15:44:13.028497 kernel: cpuidle: using governor menu
Jan 30 15:44:13.028509 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 15:44:13.028521 kernel: dca service started, version 1.12.1
Jan 30 15:44:13.028534 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 15:44:13.028550 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 15:44:13.028563 kernel: PCI: Using configuration type 1 for base access
Jan 30 15:44:13.028576 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 15:44:13.028588 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 15:44:13.028600 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 15:44:13.028625 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 15:44:13.028636 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 15:44:13.028648 kernel: ACPI: Added _OSI(Module Device)
Jan 30 15:44:13.028660 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 15:44:13.028688 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 15:44:13.028701 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 15:44:13.028713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 15:44:13.028725 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 15:44:13.028737 kernel: ACPI: Interpreter enabled
Jan 30 15:44:13.028749 kernel: ACPI: PM: (supports S0 S5)
Jan 30 15:44:13.028761 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 15:44:13.028773 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 15:44:13.028785 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 15:44:13.028802 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 15:44:13.028814 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 15:44:13.029092 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 15:44:13.029307 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 15:44:13.029496 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 15:44:13.029534 kernel: PCI host bridge to bus 0000:00
Jan 30 15:44:13.029730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 15:44:13.029913 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 15:44:13.030082 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 15:44:13.030268 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 30 15:44:13.030417 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 15:44:13.030585 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 30 15:44:13.030739 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 15:44:13.030949 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 15:44:13.031154 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 30 15:44:13.031359 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 30 15:44:13.031584 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 30 15:44:13.031861 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 30 15:44:13.032057 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 15:44:13.032329 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.032525 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 30 15:44:13.032733 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.032896 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 30 15:44:13.033112 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.034146 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 30 15:44:13.034374 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.034536 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 30 15:44:13.034731 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.034905 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 30 15:44:13.035085 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.035269 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 30 15:44:13.035441 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.035609 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 30 15:44:13.037461 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 15:44:13.037669 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 30 15:44:13.037840 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 15:44:13.037993 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Jan 30 15:44:13.038172 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 30 15:44:13.039458 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 15:44:13.039644 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 30 15:44:13.039856 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 15:44:13.040041 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Jan 30 15:44:13.040857 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 30 15:44:13.041033 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 30 15:44:13.041300 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 15:44:13.041465 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 15:44:13.041679 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 15:44:13.041843 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Jan 30 15:44:13.042000 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 30 15:44:13.043265 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 15:44:13.043447 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 15:44:13.043627 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 30 15:44:13.043794 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 30 15:44:13.043969 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 15:44:13.044165 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 30 15:44:13.044351 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 15:44:13.044515 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 15:44:13.044693 kernel: pci_bus 0000:02: extended config space not accessible
Jan 30 15:44:13.044922 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 30 15:44:13.045095 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 30 15:44:13.048308 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 15:44:13.048503 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 30 15:44:13.048687 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 15:44:13.048872 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 15:44:13.049095 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 15:44:13.049385 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 30 15:44:13.049562 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 15:44:13.049746 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 15:44:13.049928 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 15:44:13.050101 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 15:44:13.050348 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 15:44:13.050515 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 15:44:13.050682 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 15:44:13.050841 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 15:44:13.051002 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 15:44:13.051181 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 15:44:13.052421 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 15:44:13.052586 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 15:44:13.052745 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 15:44:13.052912 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 15:44:13.053088 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 15:44:13.054333 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 15:44:13.054512 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 15:44:13.054699 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 15:44:13.054869 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 15:44:13.055046 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 15:44:13.055263 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 15:44:13.055423 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 15:44:13.055581 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 15:44:13.055608 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 15:44:13.055621 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 15:44:13.055634 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 15:44:13.055646 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 15:44:13.055658 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 15:44:13.055671 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 15:44:13.055683 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 15:44:13.055695 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 15:44:13.055713 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 15:44:13.055726 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 15:44:13.055738 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 15:44:13.055751 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 15:44:13.055763 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 15:44:13.055775 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 15:44:13.055787 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 15:44:13.055799 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 15:44:13.055812 kernel: iommu: Default domain type: Translated
Jan 30 15:44:13.055829 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 15:44:13.055842 kernel: PCI: Using ACPI for IRQ routing
Jan 30 15:44:13.055854 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 15:44:13.055866 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 15:44:13.055878 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 30 15:44:13.056034 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 15:44:13.057267 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 15:44:13.057437 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 15:44:13.057457 kernel: vgaarb: loaded
Jan 30 15:44:13.057477 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 15:44:13.057490 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 15:44:13.057503 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 15:44:13.057516 kernel: pnp: PnP ACPI init
Jan 30 15:44:13.057702 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 15:44:13.057723 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 15:44:13.057736 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 15:44:13.057749 kernel: NET: Registered PF_INET protocol family
Jan 30 15:44:13.057777 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 15:44:13.057790 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 15:44:13.057802 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 15:44:13.057815 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 15:44:13.057827 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 15:44:13.057852 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 15:44:13.057864 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 15:44:13.057876 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 15:44:13.057888 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 15:44:13.057905 kernel: NET: Registered PF_XDP protocol family
Jan 30 15:44:13.058087 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 15:44:13.058274 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 15:44:13.058435 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 15:44:13.058595 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 15:44:13.058755 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 15:44:13.058942 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 15:44:13.059104 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 15:44:13.060348 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 15:44:13.060529 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 15:44:13.060691 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 15:44:13.060882 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 15:44:13.061040 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 15:44:13.061244 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 15:44:13.061431 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 15:44:13.061629 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 15:44:13.061808 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 30 15:44:13.061993 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 15:44:13.062686 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 15:44:13.062897 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 15:44:13.063057 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 30 15:44:13.063265 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 15:44:13.063434 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 15:44:13.063593 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 15:44:13.063751 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Jan 30 15:44:13.063909 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 15:44:13.064066 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 15:44:13.064338 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 15:44:13.064532 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Jan 30 15:44:13.064691 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 15:44:13.064849 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 15:44:13.065007 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 15:44:13.065176 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Jan 30 15:44:13.065380 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 15:44:13.065538 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 15:44:13.065696 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 15:44:13.065854 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Jan 30 15:44:13.066076 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 15:44:13.066273 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 15:44:13.066448 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 15:44:13.066651 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Jan 30 15:44:13.066848 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 15:44:13.067036 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 15:44:13.069245 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 15:44:13.069412 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Jan 30 15:44:13.069593 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 15:44:13.069777 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 15:44:13.069946 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 15:44:13.070117 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Jan 30 15:44:13.072345 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 15:44:13.072519 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 15:44:13.072702 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 15:44:13.072850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 15:44:13.072996 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 15:44:13.073141 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 30 15:44:13.073315 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 15:44:13.073464 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 30 15:44:13.073662 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Jan 30 15:44:13.073824 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 30 15:44:13.073976 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 15:44:13.074147 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff]
Jan 30 15:44:13.078379 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 15:44:13.078572 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 15:44:13.078786 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
Jan 30 15:44:13.078956 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 15:44:13.079118 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 15:44:13.081322 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff]
Jan 30 15:44:13.081485 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 15:44:13.081642 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 15:44:13.081835 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff]
Jan 30 15:44:13.081996 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 15:44:13.082174 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 15:44:13.083100 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff]
Jan 30 15:44:13.083308 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 15:44:13.083465 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 15:44:13.083635 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff]
Jan 30 15:44:13.083789 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 15:44:13.083940 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 15:44:13.084119 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff]
Jan 30 15:44:13.084322 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 15:44:13.084474 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 15:44:13.084651 kernel: pci_bus 0000:09: resource 0 [io
0x7000-0x7fff] Jan 30 15:44:13.084840 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 30 15:44:13.084998 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 15:44:13.085019 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 15:44:13.085040 kernel: PCI: CLS 0 bytes, default 64 Jan 30 15:44:13.085054 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 15:44:13.085068 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 30 15:44:13.085081 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 15:44:13.085098 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 30 15:44:13.085111 kernel: Initialise system trusted keyrings Jan 30 15:44:13.085124 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 15:44:13.085137 kernel: Key type asymmetric registered Jan 30 15:44:13.085150 kernel: Asymmetric key parser 'x509' registered Jan 30 15:44:13.085168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 15:44:13.085185 kernel: io scheduler mq-deadline registered Jan 30 15:44:13.085231 kernel: io scheduler kyber registered Jan 30 15:44:13.085254 kernel: io scheduler bfq registered Jan 30 15:44:13.085420 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 15:44:13.085585 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 15:44:13.085751 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 15:44:13.085923 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 15:44:13.086098 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 30 15:44:13.086333 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ 
Jan 30 15:44:13.086501 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 30 15:44:13.086665 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 30 15:44:13.086824 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 15:44:13.086982 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 30 15:44:13.087150 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 30 15:44:13.087365 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 15:44:13.087528 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 30 15:44:13.087698 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 30 15:44:13.087856 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 15:44:13.088025 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 30 15:44:13.088303 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 30 15:44:13.088465 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 15:44:13.088636 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 30 15:44:13.088818 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 30 15:44:13.088991 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 15:44:13.089152 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 30 15:44:13.089390 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 30 15:44:13.089549 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 15:44:13.089570 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 15:44:13.089584 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 15:44:13.089604 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 15:44:13.089617 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 15:44:13.089631 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 15:44:13.089644 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 15:44:13.089662 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 15:44:13.089675 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 15:44:13.089688 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 15:44:13.089870 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 15:44:13.090043 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 15:44:13.090250 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T15:44:12 UTC (1738251852)
Jan 30 15:44:13.090401 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 15:44:13.090420 kernel: intel_pstate: CPU model not supported
Jan 30 15:44:13.090441 kernel: NET: Registered PF_INET6 protocol family
Jan 30 15:44:13.090454 kernel: Segment Routing with IPv6
Jan 30 15:44:13.090467 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 15:44:13.090480 kernel: NET: Registered PF_PACKET protocol family
Jan 30 15:44:13.090493 kernel: Key type dns_resolver registered
Jan 30 15:44:13.090506 kernel: IPI shorthand broadcast: enabled
Jan 30 15:44:13.090519 kernel: sched_clock: Marking stable (1219123946, 228517840)->(1575559383, -127917597)
Jan 30 15:44:13.090532 kernel: registered taskstats version 1
Jan 30 15:44:13.090545 kernel: Loading compiled-in X.509 certificates
Jan 30 15:44:13.090563 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 15:44:13.090576 kernel: Key type .fscrypt registered
Jan 30 15:44:13.090588 kernel: Key type fscrypt-provisioning registered
Jan 30 15:44:13.090601 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 15:44:13.090614 kernel: ima: Allocated hash algorithm: sha1
Jan 30 15:44:13.090627 kernel: ima: No architecture policies found
Jan 30 15:44:13.090640 kernel: clk: Disabling unused clocks
Jan 30 15:44:13.090652 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 30 15:44:13.090665 kernel: Write protecting the kernel read-only data: 38912k
Jan 30 15:44:13.090683 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 30 15:44:13.090696 kernel: Run /init as init process
Jan 30 15:44:13.090709 kernel: with arguments:
Jan 30 15:44:13.090721 kernel: /init
Jan 30 15:44:13.090734 kernel: with environment:
Jan 30 15:44:13.090747 kernel: HOME=/
Jan 30 15:44:13.090759 kernel: TERM=linux
Jan 30 15:44:13.090772 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 15:44:13.090802 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 15:44:13.090826 systemd[1]: Detected virtualization kvm.
Jan 30 15:44:13.090841 systemd[1]: Detected architecture x86-64.
Jan 30 15:44:13.090859 systemd[1]: Running in initrd.
Jan 30 15:44:13.090872 systemd[1]: No hostname configured, using default hostname.
Jan 30 15:44:13.090886 systemd[1]: Hostname set to .
Jan 30 15:44:13.090900 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 15:44:13.090919 systemd[1]: Queued start job for default target initrd.target.
Jan 30 15:44:13.090938 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:44:13.090952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:44:13.090966 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 15:44:13.090990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 15:44:13.091004 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 15:44:13.091018 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 15:44:13.091034 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 15:44:13.091062 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 15:44:13.091076 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:44:13.091090 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 15:44:13.091104 systemd[1]: Reached target paths.target - Path Units.
Jan 30 15:44:13.091118 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 15:44:13.091132 systemd[1]: Reached target swap.target - Swaps.
Jan 30 15:44:13.091146 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 15:44:13.091185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 15:44:13.091220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 15:44:13.091238 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 15:44:13.091252 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 15:44:13.091265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:44:13.091279 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:44:13.091293 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:44:13.091312 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 15:44:13.091327 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 15:44:13.091341 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 15:44:13.091359 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 15:44:13.091373 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 15:44:13.091387 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 15:44:13.091401 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 15:44:13.091414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:44:13.091465 systemd-journald[201]: Collecting audit messages is disabled.
Jan 30 15:44:13.091503 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 15:44:13.091517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:44:13.091540 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 15:44:13.091559 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 15:44:13.091574 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 15:44:13.091587 kernel: Bridge firewalling registered
Jan 30 15:44:13.091606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:44:13.091627 systemd-journald[201]: Journal started
Jan 30 15:44:13.091669 systemd-journald[201]: Runtime Journal (/run/log/journal/68b71b6fdca041af898c653bead033ad) is 4.7M, max 37.9M, 33.2M free.
Jan 30 15:44:13.032081 systemd-modules-load[202]: Inserted module 'overlay'
Jan 30 15:44:13.153068 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 15:44:13.078931 systemd-modules-load[202]: Inserted module 'br_netfilter'
Jan 30 15:44:13.154064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:44:13.156903 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 15:44:13.164367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 15:44:13.173428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 15:44:13.177352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 15:44:13.182410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 15:44:13.186524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:44:13.201771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:44:13.210386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 15:44:13.211485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:44:13.213468 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:44:13.224403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 15:44:13.231523 dracut-cmdline[234]: dracut-dracut-053
Jan 30 15:44:13.235154 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 15:44:13.273221 systemd-resolved[240]: Positive Trust Anchors:
Jan 30 15:44:13.273266 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 15:44:13.273307 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 15:44:13.282045 systemd-resolved[240]: Defaulting to hostname 'linux'.
Jan 30 15:44:13.284010 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 15:44:13.285080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 15:44:13.340225 kernel: SCSI subsystem initialized
Jan 30 15:44:13.351212 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 15:44:13.364229 kernel: iscsi: registered transport (tcp)
Jan 30 15:44:13.389636 kernel: iscsi: registered transport (qla4xxx)
Jan 30 15:44:13.389697 kernel: QLogic iSCSI HBA Driver
Jan 30 15:44:13.444278 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 15:44:13.451406 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 15:44:13.485339 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 15:44:13.485388 kernel: device-mapper: uevent: version 1.0.3
Jan 30 15:44:13.488679 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 15:44:13.535216 kernel: raid6: sse2x4 gen() 13359 MB/s
Jan 30 15:44:13.553260 kernel: raid6: sse2x2 gen() 9487 MB/s
Jan 30 15:44:13.571844 kernel: raid6: sse2x1 gen() 9353 MB/s
Jan 30 15:44:13.571879 kernel: raid6: using algorithm sse2x4 gen() 13359 MB/s
Jan 30 15:44:13.590732 kernel: raid6: .... xor() 7833 MB/s, rmw enabled
Jan 30 15:44:13.590784 kernel: raid6: using ssse3x2 recovery algorithm
Jan 30 15:44:13.615209 kernel: xor: automatically using best checksumming function avx
Jan 30 15:44:13.782295 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 15:44:13.797102 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 15:44:13.805392 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:44:13.829944 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Jan 30 15:44:13.837466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:44:13.847347 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 15:44:13.869753 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Jan 30 15:44:13.914013 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 15:44:13.922456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 15:44:14.032474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:44:14.043367 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 15:44:14.073224 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 15:44:14.076331 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 15:44:14.078033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:44:14.080003 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 15:44:14.086384 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 15:44:14.116763 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 15:44:14.179247 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 15:44:14.179341 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 30 15:44:14.250412 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 15:44:14.250663 kernel: AVX version of gcm_enc/dec engaged.
Jan 30 15:44:14.250696 kernel: AES CTR mode by8 optimization enabled
Jan 30 15:44:14.250733 kernel: ACPI: bus type USB registered
Jan 30 15:44:14.250750 kernel: usbcore: registered new interface driver usbfs
Jan 30 15:44:14.250777 kernel: usbcore: registered new interface driver hub
Jan 30 15:44:14.250795 kernel: usbcore: registered new device driver usb
Jan 30 15:44:14.250812 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 15:44:14.250839 kernel: GPT:17805311 != 125829119
Jan 30 15:44:14.250855 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 15:44:14.250872 kernel: GPT:17805311 != 125829119
Jan 30 15:44:14.250894 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 15:44:14.250912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 15:44:14.224097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 15:44:14.224319 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:44:14.225286 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 15:44:14.225960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:44:14.226122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:44:14.240764 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:44:14.249616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:44:14.283360 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 15:44:14.343421 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 30 15:44:14.343687 kernel: libata version 3.00 loaded.
Jan 30 15:44:14.343709 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 15:44:14.343897 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 15:44:14.344088 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 30 15:44:14.344338 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 15:44:14.344580 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 15:44:14.344609 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 15:44:14.344798 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 15:44:14.344996 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 15:44:14.345242 kernel: hub 1-0:1.0: USB hub found
Jan 30 15:44:14.345516 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 15:44:14.345728 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 15:44:14.345945 kernel: hub 2-0:1.0: USB hub found
Jan 30 15:44:14.347193 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 15:44:14.347410 kernel: scsi host0: ahci
Jan 30 15:44:14.347640 kernel: scsi host1: ahci
Jan 30 15:44:14.347838 kernel: scsi host2: ahci
Jan 30 15:44:14.348083 kernel: scsi host3: ahci
Jan 30 15:44:14.348355 kernel: scsi host4: ahci
Jan 30 15:44:14.348546 kernel: scsi host5: ahci
Jan 30 15:44:14.348746 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jan 30 15:44:14.348775 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jan 30 15:44:14.348792 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jan 30 15:44:14.348809 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jan 30 15:44:14.348825 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jan 30 15:44:14.348842 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jan 30 15:44:14.348859 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465)
Jan 30 15:44:14.357673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 15:44:14.433827 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (469)
Jan 30 15:44:14.433518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:44:14.452554 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 15:44:14.465318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 15:44:14.471383 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 15:44:14.472249 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 15:44:14.487664 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 15:44:14.493307 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 15:44:14.503202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 15:44:14.503271 disk-uuid[563]: Primary Header is updated.
Jan 30 15:44:14.503271 disk-uuid[563]: Secondary Entries is updated.
Jan 30 15:44:14.503271 disk-uuid[563]: Secondary Header is updated.
Jan 30 15:44:14.541255 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:44:14.573187 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 15:44:14.646336 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 15:44:14.654215 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 15:44:14.654274 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 15:44:14.656282 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 15:44:14.659210 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 15:44:14.659249 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 15:44:14.718193 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 15:44:14.724444 kernel: usbcore: registered new interface driver usbhid
Jan 30 15:44:14.724492 kernel: usbhid: USB HID core driver
Jan 30 15:44:14.732624 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 30 15:44:14.732665 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 30 15:44:15.519213 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 15:44:15.520624 disk-uuid[564]: The operation has completed successfully.
Jan 30 15:44:15.579118 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 15:44:15.579297 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 15:44:15.598427 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 15:44:15.604804 sh[585]: Success
Jan 30 15:44:15.622235 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Jan 30 15:44:15.699952 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 15:44:15.702299 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 15:44:15.705777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 15:44:15.736720 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 30 15:44:15.736820 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 15:44:15.738775 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 15:44:15.741987 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 15:44:15.742024 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 15:44:15.753812 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 15:44:15.755406 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 15:44:15.761388 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 15:44:15.765341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 15:44:15.781225 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 15:44:15.781294 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 15:44:15.781316 kernel: BTRFS info (device vda6): using free space tree
Jan 30 15:44:15.787189 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 15:44:15.801481 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 15:44:15.801023 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 15:44:15.813410 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 15:44:15.821342 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 15:44:15.904594 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 15:44:15.914542 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 15:44:15.961823 systemd-networkd[770]: lo: Link UP
Jan 30 15:44:15.963215 systemd-networkd[770]: lo: Gained carrier
Jan 30 15:44:15.966721 systemd-networkd[770]: Enumeration completed
Jan 30 15:44:15.966952 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 15:44:15.969405 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:44:15.969411 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 15:44:15.970841 systemd[1]: Reached target network.target - Network.
Jan 30 15:44:15.974327 systemd-networkd[770]: eth0: Link UP
Jan 30 15:44:15.974333 systemd-networkd[770]: eth0: Gained carrier
Jan 30 15:44:15.974349 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:44:15.986941 ignition[683]: Ignition 2.20.0 Jan 30 15:44:15.986971 ignition[683]: Stage: fetch-offline Jan 30 15:44:15.987070 ignition[683]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:15.989351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:44:15.987101 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:15.987310 ignition[683]: parsed url from cmdline: "" Jan 30 15:44:15.987317 ignition[683]: no config URL provided Jan 30 15:44:15.987327 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:44:15.987347 ignition[683]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:44:15.994680 systemd-networkd[770]: eth0: DHCPv4 address 10.243.85.194/30, gateway 10.243.85.193 acquired from 10.243.85.193 Jan 30 15:44:15.987364 ignition[683]: failed to fetch config: resource requires networking Jan 30 15:44:15.987684 ignition[683]: Ignition finished successfully Jan 30 15:44:16.000479 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 15:44:16.022002 ignition[777]: Ignition 2.20.0 Jan 30 15:44:16.022028 ignition[777]: Stage: fetch Jan 30 15:44:16.022398 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:16.022428 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:16.022588 ignition[777]: parsed url from cmdline: "" Jan 30 15:44:16.022595 ignition[777]: no config URL provided Jan 30 15:44:16.022605 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:44:16.022620 ignition[777]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:44:16.022803 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 15:44:16.023017 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 15:44:16.023067 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jan 30 15:44:16.044811 ignition[777]: GET result: OK Jan 30 15:44:16.046077 ignition[777]: parsing config with SHA512: 8927a51e7ddc875a2e72c61e4cadfb850a4a0973671167ccd29df83fa47ef9bc43ae70f56e3137e256d798567e2d20d1c45cc7b68275a60ac4fc56a490e29e47 Jan 30 15:44:16.055331 unknown[777]: fetched base config from "system" Jan 30 15:44:16.055488 unknown[777]: fetched base config from "system" Jan 30 15:44:16.056023 ignition[777]: fetch: fetch complete Jan 30 15:44:16.055498 unknown[777]: fetched user config from "openstack" Jan 30 15:44:16.056032 ignition[777]: fetch: fetch passed Jan 30 15:44:16.059227 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 15:44:16.056114 ignition[777]: Ignition finished successfully Jan 30 15:44:16.081380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 15:44:16.100225 ignition[784]: Ignition 2.20.0 Jan 30 15:44:16.100250 ignition[784]: Stage: kargs Jan 30 15:44:16.100543 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:16.100563 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:16.101745 ignition[784]: kargs: kargs passed Jan 30 15:44:16.101840 ignition[784]: Ignition finished successfully Jan 30 15:44:16.106482 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 15:44:16.113425 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 15:44:16.134440 ignition[790]: Ignition 2.20.0 Jan 30 15:44:16.135625 ignition[790]: Stage: disks Jan 30 15:44:16.135924 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:16.135944 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:16.137149 ignition[790]: disks: disks passed Jan 30 15:44:16.140139 ignition[790]: Ignition finished successfully Jan 30 15:44:16.141523 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 30 15:44:16.143098 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 15:44:16.143871 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 15:44:16.145573 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:44:16.147274 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:44:16.148622 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:44:16.155380 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 15:44:16.185129 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 15:44:16.188973 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 15:44:16.195311 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 15:44:16.310253 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 15:44:16.310989 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 15:44:16.312897 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 15:44:16.319290 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:44:16.323838 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 15:44:16.325476 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 15:44:16.327930 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 15:44:16.332218 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 15:44:16.333449 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 30 15:44:16.340206 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806) Jan 30 15:44:16.340920 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 15:44:16.342197 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 15:44:16.342229 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:16.342248 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:16.356351 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 15:44:16.362789 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:16.369154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:44:16.415629 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 15:44:16.424120 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jan 30 15:44:16.432027 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 15:44:16.437267 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 15:44:16.542719 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 15:44:16.556391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 15:44:16.560349 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 15:44:16.571207 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 15:44:16.602806 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 15:44:16.606944 ignition[924]: INFO : Ignition 2.20.0 Jan 30 15:44:16.606944 ignition[924]: INFO : Stage: mount Jan 30 15:44:16.609243 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:16.609243 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:16.609243 ignition[924]: INFO : mount: mount passed Jan 30 15:44:16.609243 ignition[924]: INFO : Ignition finished successfully Jan 30 15:44:16.610186 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 15:44:16.734886 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 15:44:17.998631 systemd-networkd[770]: eth0: Gained IPv6LL Jan 30 15:44:19.507057 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d570:24:19ff:fef3:55c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d570:24:19ff:fef3:55c2/64 assigned by NDisc. Jan 30 15:44:19.507076 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 30 15:44:23.504032 coreos-metadata[808]: Jan 30 15:44:23.503 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:44:23.528279 coreos-metadata[808]: Jan 30 15:44:23.528 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:44:23.546655 coreos-metadata[808]: Jan 30 15:44:23.546 INFO Fetch successful Jan 30 15:44:23.547523 coreos-metadata[808]: Jan 30 15:44:23.547 INFO wrote hostname srv-eom3a.gb1.brightbox.com to /sysroot/etc/hostname Jan 30 15:44:23.550397 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 15:44:23.550612 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 15:44:23.569826 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 15:44:23.582394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 15:44:23.596230 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Jan 30 15:44:23.601454 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 15:44:23.601494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:23.603238 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:23.608200 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:23.612319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:44:23.636551 ignition[957]: INFO : Ignition 2.20.0 Jan 30 15:44:23.636551 ignition[957]: INFO : Stage: files Jan 30 15:44:23.638323 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:23.638323 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:23.638323 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 30 15:44:23.640799 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 15:44:23.640799 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 15:44:23.642688 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 15:44:23.643763 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 15:44:23.643763 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 15:44:23.643468 unknown[957]: wrote ssh authorized keys file for user: core Jan 30 15:44:23.646570 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 15:44:23.646570 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 
15:44:23.929705 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 15:44:25.130872 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 15:44:25.130872 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 15:44:25.140677 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 15:44:25.789842 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 15:44:26.130126 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 15:44:26.130126 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:44:26.133061 ignition[957]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 15:44:26.133061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 15:44:26.389368 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 15:44:27.470012 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 15:44:27.472372 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 15:44:27.475245 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:44:27.475245 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 
30 15:44:27.475245 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 15:44:27.475245 ignition[957]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 15:44:27.481093 ignition[957]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 15:44:27.481093 ignition[957]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:44:27.481093 ignition[957]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:44:27.481093 ignition[957]: INFO : files: files passed Jan 30 15:44:27.481093 ignition[957]: INFO : Ignition finished successfully Jan 30 15:44:27.480821 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 15:44:27.489494 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 15:44:27.491918 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 15:44:27.501820 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 15:44:27.502118 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 15:44:27.518752 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:44:27.518752 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:44:27.522470 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:44:27.524075 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:44:27.525707 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 30 15:44:27.531400 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 15:44:27.574066 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 15:44:27.574290 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 15:44:27.576462 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 15:44:27.577955 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 15:44:27.578782 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 15:44:27.584433 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 15:44:27.604964 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:44:27.611395 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 15:44:27.627440 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:44:27.629491 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:44:27.630421 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 15:44:27.631823 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 15:44:27.632007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:44:27.633705 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 15:44:27.634597 systemd[1]: Stopped target basic.target - Basic System. Jan 30 15:44:27.635984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 15:44:27.637260 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:44:27.638517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 15:44:27.639979 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 30 15:44:27.641423 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:44:27.642999 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 15:44:27.644369 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 15:44:27.645826 systemd[1]: Stopped target swap.target - Swaps. Jan 30 15:44:27.647073 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 15:44:27.647341 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:44:27.648966 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:44:27.649938 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:44:27.651247 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 15:44:27.653335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:44:27.654449 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 15:44:27.654647 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 15:44:27.656564 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 15:44:27.656848 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:44:27.658482 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 15:44:27.658741 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 15:44:27.671540 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 15:44:27.676431 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 15:44:27.688247 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 15:44:27.689413 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:44:27.691306 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 30 15:44:27.692211 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:44:27.695238 ignition[1010]: INFO : Ignition 2.20.0 Jan 30 15:44:27.695238 ignition[1010]: INFO : Stage: umount Jan 30 15:44:27.695238 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:27.695238 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:27.699428 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 15:44:27.704103 ignition[1010]: INFO : umount: umount passed Jan 30 15:44:27.704103 ignition[1010]: INFO : Ignition finished successfully Jan 30 15:44:27.699605 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 15:44:27.700983 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 15:44:27.702288 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 15:44:27.710717 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 15:44:27.710796 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 15:44:27.714065 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 15:44:27.714137 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 15:44:27.714841 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 15:44:27.714916 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 15:44:27.718401 systemd[1]: Stopped target network.target - Network. Jan 30 15:44:27.719313 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 15:44:27.719394 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:44:27.720163 systemd[1]: Stopped target paths.target - Path Units. Jan 30 15:44:27.721570 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 15:44:27.727269 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:44:27.728098 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 15:44:27.729822 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 15:44:27.731304 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 15:44:27.731381 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:44:27.732683 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 15:44:27.732746 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:44:27.733953 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 15:44:27.734026 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 15:44:27.735321 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 15:44:27.735392 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 15:44:27.736897 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 15:44:27.739219 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 15:44:27.742157 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 15:44:27.742333 systemd-networkd[770]: eth0: DHCPv6 lease lost Jan 30 15:44:27.744540 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 15:44:27.744695 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 15:44:27.746309 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 15:44:27.746467 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 15:44:27.749157 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 15:44:27.749689 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:44:27.752383 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 30 15:44:27.752471 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 15:44:27.757298 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 15:44:27.759529 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 15:44:27.759603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:44:27.761187 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:44:27.762972 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 15:44:27.763118 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 15:44:27.770024 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:44:27.770182 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:44:27.771331 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 15:44:27.771392 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 15:44:27.772800 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 15:44:27.772863 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:44:27.774860 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 15:44:27.775101 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:44:27.786903 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 15:44:27.787008 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 15:44:27.790318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 15:44:27.790410 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:44:27.791110 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 30 15:44:27.791242 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:44:27.793533 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 15:44:27.793604 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 15:44:27.794851 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:44:27.794928 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:27.803454 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 15:44:27.804716 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 15:44:27.804793 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:44:27.805542 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 15:44:27.805621 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:44:27.806374 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 15:44:27.806445 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:44:27.807990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:44:27.808059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:27.811919 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 15:44:27.812074 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 15:44:27.817376 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 15:44:27.817595 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 15:44:27.819829 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 15:44:27.826455 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 30 15:44:27.838721 systemd[1]: Switching root. Jan 30 15:44:27.872483 systemd-journald[201]: Journal stopped Jan 30 15:44:29.503608 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 30 15:44:29.503715 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 15:44:29.503750 kernel: SELinux: policy capability open_perms=1 Jan 30 15:44:29.503783 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 15:44:29.503815 kernel: SELinux: policy capability always_check_network=0 Jan 30 15:44:29.503845 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 15:44:29.503864 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 15:44:29.503882 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 15:44:29.503901 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 15:44:29.503920 kernel: audit: type=1403 audit(1738251868.318:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 15:44:29.503946 systemd[1]: Successfully loaded SELinux policy in 54.841ms. Jan 30 15:44:29.505488 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.597ms. Jan 30 15:44:29.505520 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:44:29.505553 systemd[1]: Detected virtualization kvm. Jan 30 15:44:29.505705 systemd[1]: Detected architecture x86-64. Jan 30 15:44:29.505732 systemd[1]: Detected first boot. Jan 30 15:44:29.505754 systemd[1]: Hostname set to . Jan 30 15:44:29.505782 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:44:29.505805 zram_generator::config[1053]: No configuration found. Jan 30 15:44:29.506641 systemd[1]: Populated /etc with preset unit settings. 
Jan 30 15:44:29.506670 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 15:44:29.506805 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 15:44:29.506831 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 15:44:29.506853 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 15:44:29.506886 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 15:44:29.506909 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 15:44:29.506937 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 15:44:29.506974 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 15:44:29.507010 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 15:44:29.507033 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 15:44:29.507066 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 15:44:29.507094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:44:29.507117 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:44:29.507138 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 15:44:29.508238 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 15:44:29.508280 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 15:44:29.508329 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:44:29.508353 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 30 15:44:29.508374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:44:29.508403 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 15:44:29.508425 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 15:44:29.508459 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 15:44:29.508494 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 15:44:29.508538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:44:29.508584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:44:29.508615 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:44:29.508637 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:44:29.508669 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 15:44:29.508708 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 15:44:29.508731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:44:29.508757 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:44:29.508779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:44:29.508799 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 15:44:29.508820 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 15:44:29.508855 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 15:44:29.508876 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 15:44:29.508906 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:29.508942 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 30 15:44:29.508965 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 15:44:29.508991 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 15:44:29.509021 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 15:44:29.509042 systemd[1]: Reached target machines.target - Containers. Jan 30 15:44:29.509080 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 15:44:29.509101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:44:29.509120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:44:29.509140 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 15:44:29.513233 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:44:29.513285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:44:29.513316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:44:29.513338 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 15:44:29.513360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:44:29.513388 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 15:44:29.513409 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 15:44:29.513430 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 15:44:29.513466 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 15:44:29.513489 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 30 15:44:29.513510 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:44:29.513530 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:44:29.513550 kernel: fuse: init (API version 7.39) Jan 30 15:44:29.513584 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 15:44:29.513607 kernel: loop: module loaded Jan 30 15:44:29.513635 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 15:44:29.513657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:44:29.513690 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 15:44:29.513713 systemd[1]: Stopped verity-setup.service. Jan 30 15:44:29.513734 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:29.513763 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 15:44:29.513784 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 15:44:29.513805 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 15:44:29.513838 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 15:44:29.513860 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 15:44:29.513880 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 15:44:29.513907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:44:29.513966 systemd-journald[1149]: Collecting audit messages is disabled. Jan 30 15:44:29.514034 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 15:44:29.514081 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 15:44:29.514104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 15:44:29.514150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:44:29.515238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:44:29.515266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:44:29.515287 kernel: ACPI: bus type drm_connector registered Jan 30 15:44:29.515320 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 15:44:29.515370 systemd-journald[1149]: Journal started Jan 30 15:44:29.515406 systemd-journald[1149]: Runtime Journal (/run/log/journal/68b71b6fdca041af898c653bead033ad) is 4.7M, max 37.9M, 33.2M free. Jan 30 15:44:29.113255 systemd[1]: Queued start job for default target multi-user.target. Jan 30 15:44:29.135531 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 15:44:29.518083 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:44:29.136200 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 15:44:29.519007 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:44:29.519376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:44:29.520768 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 15:44:29.520994 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 15:44:29.522383 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:44:29.522620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:44:29.524093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:44:29.525507 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 15:44:29.526659 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 15:44:29.543473 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 30 15:44:29.554198 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 15:44:29.565412 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 15:44:29.567321 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 15:44:29.567375 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:44:29.571702 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 15:44:29.580413 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 15:44:29.585584 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 15:44:29.588400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:44:29.592443 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 15:44:29.604694 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 15:44:29.605526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:44:29.607646 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 15:44:29.608512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:44:29.620396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:44:29.633379 systemd-journald[1149]: Time spent on flushing to /var/log/journal/68b71b6fdca041af898c653bead033ad is 97.162ms for 1143 entries. Jan 30 15:44:29.633379 systemd-journald[1149]: System Journal (/var/log/journal/68b71b6fdca041af898c653bead033ad) is 8.0M, max 584.8M, 576.8M free. 
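The flush statistic above (97.162 ms for 1143 entries) works out to roughly 85 µs per journal entry; a quick sanity check on those numbers:

```python
# Numbers taken from the systemd-journald flush statistic above.
flush_ms, entries = 97.162, 1143
per_entry_us = flush_ms * 1000 / entries
print(round(per_entry_us, 1))  # ~85.0 µs per entry flushed to /var/log/journal
```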
Jan 30 15:44:29.751059 systemd-journald[1149]: Received client request to flush runtime journal. Jan 30 15:44:29.751517 kernel: loop0: detected capacity change from 0 to 141000 Jan 30 15:44:29.631333 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 15:44:29.636426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:44:29.639785 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 15:44:29.641762 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 15:44:29.643652 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 15:44:29.690561 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 15:44:29.691586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 15:44:29.704383 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 15:44:29.748625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:44:29.755218 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 15:44:29.762753 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 15:44:29.793231 kernel: loop1: detected capacity change from 0 to 138184 Jan 30 15:44:29.799253 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jan 30 15:44:29.799536 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jan 30 15:44:29.805459 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 15:44:29.808886 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 15:44:29.820688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 30 15:44:29.836408 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 15:44:29.877935 kernel: loop2: detected capacity change from 0 to 218376 Jan 30 15:44:29.935722 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 15:44:29.949187 kernel: loop3: detected capacity change from 0 to 8 Jan 30 15:44:29.944388 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:44:29.971015 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:44:29.983042 kernel: loop4: detected capacity change from 0 to 141000 Jan 30 15:44:29.982284 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 15:44:30.014921 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 30 15:44:30.014950 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 30 15:44:30.025188 kernel: loop5: detected capacity change from 0 to 138184 Jan 30 15:44:30.026703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:44:30.038422 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 15:44:30.068956 kernel: loop6: detected capacity change from 0 to 218376 Jan 30 15:44:30.104193 kernel: loop7: detected capacity change from 0 to 8 Jan 30 15:44:30.108325 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 15:44:30.110449 (sd-merge)[1213]: Merged extensions into '/usr'. Jan 30 15:44:30.122577 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 15:44:30.122983 systemd[1]: Reloading... Jan 30 15:44:30.247199 zram_generator::config[1237]: No configuration found. 
Jan 30 15:44:30.396135 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 15:44:30.493825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:44:30.561969 systemd[1]: Reloading finished in 438 ms. Jan 30 15:44:30.593743 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 15:44:30.600206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 15:44:30.614419 systemd[1]: Starting ensure-sysext.service... Jan 30 15:44:30.617416 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:44:30.647392 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)... Jan 30 15:44:30.647426 systemd[1]: Reloading... Jan 30 15:44:30.648592 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 15:44:30.649039 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 15:44:30.650985 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 15:44:30.651547 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 30 15:44:30.651747 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 30 15:44:30.657249 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:44:30.657412 systemd-tmpfiles[1297]: Skipping /boot Jan 30 15:44:30.674997 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 15:44:30.675178 systemd-tmpfiles[1297]: Skipping /boot Jan 30 15:44:30.755255 zram_generator::config[1333]: No configuration found. Jan 30 15:44:30.905522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:44:30.974114 systemd[1]: Reloading finished in 326 ms. Jan 30 15:44:30.993681 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 15:44:30.999760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:44:31.013385 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 15:44:31.016368 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 15:44:31.026425 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 15:44:31.031383 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:44:31.042480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:44:31.048388 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 15:44:31.067420 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 15:44:31.071943 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:31.074249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:44:31.080466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:44:31.092479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:44:31.106883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 15:44:31.108596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:44:31.108780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:31.111146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:44:31.111732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:44:31.127070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:31.127434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:44:31.135324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:44:31.136146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:44:31.136391 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:31.137416 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Jan 30 15:44:31.141150 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 15:44:31.142995 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 15:44:31.145414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:44:31.145649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:44:31.152451 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 30 15:44:31.160630 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 15:44:31.165552 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:31.165852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:44:31.176482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:44:31.179467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:44:31.180381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:44:31.180606 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:31.181782 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:44:31.184226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:44:31.185654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:44:31.185860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:44:31.188614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:44:31.191828 systemd[1]: Finished ensure-sysext.service. Jan 30 15:44:31.215391 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 15:44:31.221270 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 15:44:31.222147 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 30 15:44:31.222569 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:44:31.231368 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:44:31.232584 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 15:44:31.241727 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 15:44:31.249030 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:44:31.249296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:44:31.275349 augenrules[1442]: No rules Jan 30 15:44:31.277824 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 15:44:31.278859 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 15:44:31.282140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:44:31.283617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:44:31.285058 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:44:31.374966 systemd-resolved[1391]: Positive Trust Anchors: Jan 30 15:44:31.376678 systemd-resolved[1391]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:44:31.376727 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:44:31.395892 systemd-resolved[1391]: Using system hostname 'srv-eom3a.gb1.brightbox.com'. Jan 30 15:44:31.400740 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:44:31.401594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:44:31.404354 systemd-networkd[1425]: lo: Link UP Jan 30 15:44:31.404365 systemd-networkd[1425]: lo: Gained carrier Jan 30 15:44:31.405305 systemd-networkd[1425]: Enumeration completed Jan 30 15:44:31.405423 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:44:31.406210 systemd[1]: Reached target network.target - Network. Jan 30 15:44:31.413398 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 15:44:31.422628 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 15:44:31.434997 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 15:44:31.435841 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 15:44:31.518212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1429) Jan 30 15:44:31.584949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 30 15:44:31.590201 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 15:44:31.600828 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 15:44:31.604699 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:31.604712 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:44:31.608624 systemd-networkd[1425]: eth0: Link UP Jan 30 15:44:31.608636 systemd-networkd[1425]: eth0: Gained carrier Jan 30 15:44:31.608662 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:31.610224 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 15:44:31.620217 kernel: ACPI: button: Power Button [PWRF] Jan 30 15:44:31.633926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 15:44:31.649275 systemd-networkd[1425]: eth0: DHCPv4 address 10.243.85.194/30, gateway 10.243.85.193 acquired from 10.243.85.193 Jan 30 15:44:31.651708 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Jan 30 15:44:31.676558 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 15:44:31.682322 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 15:44:31.682640 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 15:44:31.695238 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 15:44:31.731297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:31.903730 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
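The DHCPv4 lease above is a /30, which leaves exactly two usable host addresses; the gateway (.193) and this host (.194) fill both. A quick check with Python's standard `ipaddress` module:

```python
import ipaddress

# Address and prefix copied from the systemd-networkd lease line above.
iface = ipaddress.ip_interface("10.243.85.194/30")
hosts = [str(h) for h in iface.network.hosts()]
print(iface.network)  # 10.243.85.192/30
print(hosts)          # ['10.243.85.193', '10.243.85.194'] -> gateway, this host
```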
Jan 30 15:44:31.922283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:31.927603 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 15:44:31.956689 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:44:31.992909 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 15:44:31.994687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:44:31.995480 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:44:31.996411 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 15:44:31.997402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 15:44:31.998505 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 15:44:31.999427 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 15:44:32.000200 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 15:44:32.000927 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 15:44:32.001000 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:44:32.001615 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:44:32.003742 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 15:44:32.006360 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 15:44:32.012450 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 15:44:32.032508 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 15:44:32.034274 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 30 15:44:32.035136 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:44:32.035779 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:44:32.036454 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:44:32.036518 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:44:32.039523 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:44:32.045310 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 15:44:32.051398 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 15:44:32.055380 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 15:44:32.060371 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 15:44:32.063563 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 15:44:32.065256 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 15:44:32.068393 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 15:44:32.072306 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 15:44:32.083385 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 15:44:32.087235 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 15:44:32.100453 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 15:44:32.102123 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 15:44:32.102854 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 30 15:44:32.108443 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 15:44:32.114298 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 15:44:32.118391 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 15:44:32.136820 extend-filesystems[1485]: Found loop4 Jan 30 15:44:32.136820 extend-filesystems[1485]: Found loop5 Jan 30 15:44:32.136820 extend-filesystems[1485]: Found loop6 Jan 30 15:44:32.136820 extend-filesystems[1485]: Found loop7 Jan 30 15:44:32.136820 extend-filesystems[1485]: Found vda Jan 30 15:44:32.136820 extend-filesystems[1485]: Found vda1 Jan 30 15:44:32.136820 extend-filesystems[1485]: Found vda2 Jan 30 15:44:32.136820 extend-filesystems[1485]: Found vda3 Jan 30 15:44:32.136820 extend-filesystems[1485]: Found usr Jan 30 15:44:32.159529 extend-filesystems[1485]: Found vda4 Jan 30 15:44:32.159529 extend-filesystems[1485]: Found vda6 Jan 30 15:44:32.159529 extend-filesystems[1485]: Found vda7 Jan 30 15:44:32.159529 extend-filesystems[1485]: Found vda9 Jan 30 15:44:32.159529 extend-filesystems[1485]: Checking size of /dev/vda9 Jan 30 15:44:32.151844 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 15:44:32.152106 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 15:44:32.153065 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 15:44:32.183258 jq[1484]: false Jan 30 15:44:32.176629 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 15:44:32.177000 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 30 15:44:32.184036 jq[1495]: true Jan 30 15:44:32.218438 extend-filesystems[1485]: Resized partition /dev/vda9 Jan 30 15:44:32.224653 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 15:44:32.225023 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 15:44:32.232700 dbus-daemon[1483]: [system] SELinux support is enabled Jan 30 15:44:32.235518 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 15:44:32.238183 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024) Jan 30 15:44:32.240933 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 15:44:32.246478 tar[1498]: linux-amd64/LICENSE Jan 30 15:44:32.246478 tar[1498]: linux-amd64/helm Jan 30 15:44:32.240974 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 15:44:32.244596 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 15:44:32.244630 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 15:44:32.279662 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 30 15:44:32.279752 jq[1513]: true Jan 30 15:44:32.287625 dbus-daemon[1483]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1425 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 15:44:32.304396 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 30 15:44:32.306886 update_engine[1494]: I20250130 15:44:32.296149 1494 main.cc:92] Flatcar Update Engine starting Jan 30 15:44:32.320545 systemd[1]: Started update-engine.service - Update Engine. Jan 30 15:44:32.327221 update_engine[1494]: I20250130 15:44:32.326684 1494 update_check_scheduler.cc:74] Next update check in 5m19s Jan 30 15:44:32.329397 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 15:44:32.339213 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1432) Jan 30 15:44:32.516546 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 15:44:32.516590 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 15:44:32.519076 systemd-logind[1492]: New seat seat0. Jan 30 15:44:32.522166 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 15:44:32.604698 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 15:44:32.605474 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 15:44:32.608639 dbus-daemon[1483]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1525 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 15:44:32.611209 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 15:44:32.611948 bash[1545]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:44:32.619836 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 15:44:32.622347 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 15:44:33.210379 systemd-timesyncd[1421]: Contacted time server 46.101.52.249:123 (0.flatcar.pool.ntp.org). 
Jan 30 15:44:33.210459 systemd-timesyncd[1421]: Initial clock synchronization to Thu 2025-01-30 15:44:33.210150 UTC. Jan 30 15:44:33.210789 systemd-resolved[1391]: Clock change detected. Flushing caches. Jan 30 15:44:33.215421 systemd[1]: Starting sshkeys.service... Jan 30 15:44:33.245120 containerd[1507]: time="2025-01-30T15:44:33.242624425Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 15:44:33.260689 polkitd[1551]: Started polkitd version 121 Jan 30 15:44:33.271927 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 15:44:33.282579 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 15:44:33.306652 polkitd[1551]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 15:44:33.308261 polkitd[1551]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 15:44:33.310377 polkitd[1551]: Finished loading, compiling and executing 2 rules Jan 30 15:44:33.312333 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 15:44:33.312563 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 15:44:33.312771 polkitd[1551]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 15:44:33.350374 containerd[1507]: time="2025-01-30T15:44:33.349661832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.359658277Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.359710062Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.359735760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.359990465Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.360038936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.360178045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.360200405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.360426588Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.360450120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.360469999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:33.361639 containerd[1507]: time="2025-01-30T15:44:33.360484966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:33.362019 containerd[1507]: time="2025-01-30T15:44:33.360600208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:33.362019 containerd[1507]: time="2025-01-30T15:44:33.360974143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:33.365436 containerd[1507]: time="2025-01-30T15:44:33.365400728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:33.365537 containerd[1507]: time="2025-01-30T15:44:33.365513699Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 15:44:33.365747 containerd[1507]: time="2025-01-30T15:44:33.365720060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 15:44:33.366469 containerd[1507]: time="2025-01-30T15:44:33.366442093Z" level=info msg="metadata content store policy set" policy=shared Jan 30 15:44:33.368071 systemd-hostnamed[1525]: Hostname set to (static) Jan 30 15:44:33.374130 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 15:44:33.390666 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 15:44:33.390666 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 15:44:33.390666 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 15:44:33.399918 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Jan 30 15:44:33.401036 containerd[1507]: time="2025-01-30T15:44:33.397431152Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 15:44:33.401036 containerd[1507]: time="2025-01-30T15:44:33.397528899Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 15:44:33.401036 containerd[1507]: time="2025-01-30T15:44:33.397556158Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 15:44:33.401036 containerd[1507]: time="2025-01-30T15:44:33.397581773Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 15:44:33.401036 containerd[1507]: time="2025-01-30T15:44:33.397603992Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 15:44:33.401036 containerd[1507]: time="2025-01-30T15:44:33.397892579Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 15:44:33.392907 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 30 15:44:33.401793 containerd[1507]: time="2025-01-30T15:44:33.401222414Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 15:44:33.395120 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 15:44:33.402089 containerd[1507]: time="2025-01-30T15:44:33.401973065Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 15:44:33.402089 containerd[1507]: time="2025-01-30T15:44:33.402022112Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 15:44:33.402089 containerd[1507]: time="2025-01-30T15:44:33.402048880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 15:44:33.402418 containerd[1507]: time="2025-01-30T15:44:33.402070467Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 15:44:33.402418 containerd[1507]: time="2025-01-30T15:44:33.402299507Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 15:44:33.402680 containerd[1507]: time="2025-01-30T15:44:33.402321346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 15:44:33.402680 containerd[1507]: time="2025-01-30T15:44:33.402614798Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 15:44:33.402851 containerd[1507]: time="2025-01-30T15:44:33.402651098Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 15:44:33.403127 containerd[1507]: time="2025-01-30T15:44:33.403079549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 30 15:44:33.403283 containerd[1507]: time="2025-01-30T15:44:33.403216793Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 15:44:33.403283 containerd[1507]: time="2025-01-30T15:44:33.403243689Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 15:44:33.403570 containerd[1507]: time="2025-01-30T15:44:33.403430610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403570 containerd[1507]: time="2025-01-30T15:44:33.403461634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403570 containerd[1507]: time="2025-01-30T15:44:33.403503623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403570 containerd[1507]: time="2025-01-30T15:44:33.403526848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403570 containerd[1507]: time="2025-01-30T15:44:33.403546317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403928 containerd[1507]: time="2025-01-30T15:44:33.403788234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403928 containerd[1507]: time="2025-01-30T15:44:33.403816878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403928 containerd[1507]: time="2025-01-30T15:44:33.403854869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.403928 containerd[1507]: time="2025-01-30T15:44:33.403900463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 15:44:33.404192 containerd[1507]: time="2025-01-30T15:44:33.404114985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.404192 containerd[1507]: time="2025-01-30T15:44:33.404153751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.404358 containerd[1507]: time="2025-01-30T15:44:33.404174668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.404358 containerd[1507]: time="2025-01-30T15:44:33.404310274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.404563 containerd[1507]: time="2025-01-30T15:44:33.404336083Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:44:33.404563 containerd[1507]: time="2025-01-30T15:44:33.404518927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.404764 containerd[1507]: time="2025-01-30T15:44:33.404546811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.404764 containerd[1507]: time="2025-01-30T15:44:33.404699039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:44:33.404915 containerd[1507]: time="2025-01-30T15:44:33.404892236Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:44:33.405048 containerd[1507]: time="2025-01-30T15:44:33.405023384Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:44:33.405603 containerd[1507]: time="2025-01-30T15:44:33.405231533Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:44:33.405603 containerd[1507]: time="2025-01-30T15:44:33.405264374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:44:33.405603 containerd[1507]: time="2025-01-30T15:44:33.405281645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:44:33.405603 containerd[1507]: time="2025-01-30T15:44:33.405312677Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 15:44:33.405603 containerd[1507]: time="2025-01-30T15:44:33.405347645Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:44:33.405603 containerd[1507]: time="2025-01-30T15:44:33.405374986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 15:44:33.406225 containerd[1507]: time="2025-01-30T15:44:33.406070117Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:44:33.407350 containerd[1507]: time="2025-01-30T15:44:33.406859144Z" level=info msg="Connect containerd service" Jan 30 15:44:33.407350 containerd[1507]: time="2025-01-30T15:44:33.406917650Z" level=info msg="using legacy CRI server" Jan 30 15:44:33.407350 containerd[1507]: time="2025-01-30T15:44:33.406933956Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:44:33.407350 containerd[1507]: time="2025-01-30T15:44:33.407119886Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:44:33.409172 containerd[1507]: time="2025-01-30T15:44:33.409127922Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:44:33.409826 containerd[1507]: time="2025-01-30T15:44:33.409675861Z" level=info msg="Start subscribing containerd event" Jan 30 15:44:33.409826 containerd[1507]: time="2025-01-30T15:44:33.409802824Z" level=info msg="Start recovering state" Jan 30 15:44:33.410051 containerd[1507]: time="2025-01-30T15:44:33.410025401Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 30 15:44:33.411181 containerd[1507]: time="2025-01-30T15:44:33.411153282Z" level=info msg="Start event monitor" Jan 30 15:44:33.411241 containerd[1507]: time="2025-01-30T15:44:33.411196075Z" level=info msg="Start snapshots syncer" Jan 30 15:44:33.411241 containerd[1507]: time="2025-01-30T15:44:33.411226262Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:44:33.411309 containerd[1507]: time="2025-01-30T15:44:33.411239875Z" level=info msg="Start streaming server" Jan 30 15:44:33.412045 containerd[1507]: time="2025-01-30T15:44:33.411594481Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 15:44:33.417131 containerd[1507]: time="2025-01-30T15:44:33.416175257Z" level=info msg="containerd successfully booted in 0.182243s" Jan 30 15:44:33.416289 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:44:33.618641 systemd-networkd[1425]: eth0: Gained IPv6LL Jan 30 15:44:33.628546 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 15:44:33.633053 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:44:33.645651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:44:33.652683 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:44:33.724958 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:44:33.920152 tar[1498]: linux-amd64/README.md Jan 30 15:44:33.946174 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 15:44:34.124963 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:44:34.144751 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 15:44:34.157240 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:44:34.170803 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 30 15:44:34.177850 systemd[1]: Started sshd@0-10.243.85.194:22-139.178.89.65:45530.service - OpenSSH per-connection server daemon (139.178.89.65:45530). Jan 30 15:44:34.184249 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:44:34.184824 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:44:34.193779 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 15:44:34.246552 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:44:34.257781 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:44:34.260995 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 15:44:34.262112 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 15:44:34.798371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:44:34.801737 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:44:35.114756 sshd[1597]: Accepted publickey for core from 139.178.89.65 port 45530 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ Jan 30 15:44:35.117531 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:35.127651 systemd-networkd[1425]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d570:24:19ff:fef3:55c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d570:24:19ff:fef3:55c2/64 assigned by NDisc. Jan 30 15:44:35.127664 systemd-networkd[1425]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 30 15:44:35.137497 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:44:35.152358 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 15:44:35.163433 systemd-logind[1492]: New session 1 of user core. 
Jan 30 15:44:35.176497 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 15:44:35.186516 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 15:44:35.198625 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:44:35.334198 systemd[1620]: Queued start job for default target default.target. Jan 30 15:44:35.345358 systemd[1620]: Created slice app.slice - User Application Slice. Jan 30 15:44:35.345398 systemd[1620]: Reached target paths.target - Paths. Jan 30 15:44:35.345422 systemd[1620]: Reached target timers.target - Timers. Jan 30 15:44:35.349243 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:44:35.364266 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:44:35.365307 systemd[1620]: Reached target sockets.target - Sockets. Jan 30 15:44:35.365335 systemd[1620]: Reached target basic.target - Basic System. Jan 30 15:44:35.365398 systemd[1620]: Reached target default.target - Main User Target. Jan 30 15:44:35.365458 systemd[1620]: Startup finished in 157ms. Jan 30 15:44:35.365760 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:44:35.373458 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:44:35.479839 kubelet[1612]: E0130 15:44:35.479716 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:44:35.482503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:44:35.482742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:44:35.483414 systemd[1]: kubelet.service: Consumed 1.025s CPU time. 
Jan 30 15:44:36.017688 systemd[1]: Started sshd@1-10.243.85.194:22-139.178.89.65:45544.service - OpenSSH per-connection server daemon (139.178.89.65:45544). Jan 30 15:44:36.953390 sshd[1634]: Accepted publickey for core from 139.178.89.65 port 45544 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ Jan 30 15:44:36.955794 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:36.962598 systemd-logind[1492]: New session 2 of user core. Jan 30 15:44:36.973599 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:44:37.574790 sshd[1637]: Connection closed by 139.178.89.65 port 45544 Jan 30 15:44:37.574496 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Jan 30 15:44:37.580729 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. Jan 30 15:44:37.582300 systemd[1]: sshd@1-10.243.85.194:22-139.178.89.65:45544.service: Deactivated successfully. Jan 30 15:44:37.585650 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 15:44:37.587869 systemd-logind[1492]: Removed session 2. Jan 30 15:44:37.734568 systemd[1]: Started sshd@2-10.243.85.194:22-139.178.89.65:45556.service - OpenSSH per-connection server daemon (139.178.89.65:45556). Jan 30 15:44:38.619871 sshd[1642]: Accepted publickey for core from 139.178.89.65 port 45556 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ Jan 30 15:44:38.622353 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:38.629180 systemd-logind[1492]: New session 3 of user core. Jan 30 15:44:38.641527 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 15:44:39.292202 sshd[1644]: Connection closed by 139.178.89.65 port 45556 Jan 30 15:44:39.294722 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Jan 30 15:44:39.301884 systemd[1]: sshd@2-10.243.85.194:22-139.178.89.65:45556.service: Deactivated successfully. 
Jan 30 15:44:39.305902 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 15:44:39.308449 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. Jan 30 15:44:39.310664 systemd-logind[1492]: Removed session 3. Jan 30 15:44:39.311016 agetty[1604]: failed to open credentials directory Jan 30 15:44:39.317638 agetty[1605]: failed to open credentials directory Jan 30 15:44:39.326700 login[1604]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:44:39.335511 systemd-logind[1492]: New session 4 of user core. Jan 30 15:44:39.338550 login[1605]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:44:39.345592 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 15:44:39.354039 systemd-logind[1492]: New session 5 of user core. Jan 30 15:44:39.361701 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 15:44:39.744616 coreos-metadata[1482]: Jan 30 15:44:39.744 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:44:39.769630 coreos-metadata[1482]: Jan 30 15:44:39.769 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 15:44:39.777128 coreos-metadata[1482]: Jan 30 15:44:39.777 INFO Fetch failed with 404: resource not found Jan 30 15:44:39.777338 coreos-metadata[1482]: Jan 30 15:44:39.777 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:44:39.778327 coreos-metadata[1482]: Jan 30 15:44:39.778 INFO Fetch successful Jan 30 15:44:39.778599 coreos-metadata[1482]: Jan 30 15:44:39.778 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 15:44:39.791307 coreos-metadata[1482]: Jan 30 15:44:39.791 INFO Fetch successful Jan 30 15:44:39.791469 coreos-metadata[1482]: Jan 30 15:44:39.791 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 15:44:39.804293 coreos-metadata[1482]: Jan 30 
15:44:39.804 INFO Fetch successful Jan 30 15:44:39.804373 coreos-metadata[1482]: Jan 30 15:44:39.804 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 15:44:39.817953 coreos-metadata[1482]: Jan 30 15:44:39.817 INFO Fetch successful Jan 30 15:44:39.818178 coreos-metadata[1482]: Jan 30 15:44:39.818 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 15:44:39.832800 coreos-metadata[1482]: Jan 30 15:44:39.832 INFO Fetch successful Jan 30 15:44:39.864652 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:44:39.865614 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 15:44:40.355509 coreos-metadata[1556]: Jan 30 15:44:40.355 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:44:40.378239 coreos-metadata[1556]: Jan 30 15:44:40.378 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 15:44:40.404371 coreos-metadata[1556]: Jan 30 15:44:40.404 INFO Fetch successful Jan 30 15:44:40.404620 coreos-metadata[1556]: Jan 30 15:44:40.404 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 15:44:40.432908 coreos-metadata[1556]: Jan 30 15:44:40.432 INFO Fetch successful Jan 30 15:44:40.435302 unknown[1556]: wrote ssh authorized keys file for user: core Jan 30 15:44:40.461826 update-ssh-keys[1681]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:44:40.462385 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:44:40.465138 systemd[1]: Finished sshkeys.service. Jan 30 15:44:40.468435 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:44:40.468763 systemd[1]: Startup finished in 1.394s (kernel) + 15.559s (initrd) + 11.624s (userspace) = 28.577s. 
Jan 30 15:44:45.729342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 15:44:45.738401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:44:45.934066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:44:45.946636 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 15:44:46.007533 kubelet[1693]: E0130 15:44:46.007262 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 15:44:46.011384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 15:44:46.011635 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 15:44:49.453595 systemd[1]: Started sshd@3-10.243.85.194:22-139.178.89.65:48662.service - OpenSSH per-connection server daemon (139.178.89.65:48662).
Jan 30 15:44:50.343143 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 48662 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:44:50.345374 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:44:50.352640 systemd-logind[1492]: New session 6 of user core.
Jan 30 15:44:50.361367 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 15:44:50.962846 sshd[1704]: Connection closed by 139.178.89.65 port 48662
Jan 30 15:44:50.962498 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Jan 30 15:44:50.967264 systemd[1]: sshd@3-10.243.85.194:22-139.178.89.65:48662.service: Deactivated successfully.
Jan 30 15:44:50.969907 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 15:44:50.971676 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
Jan 30 15:44:50.973658 systemd-logind[1492]: Removed session 6.
Jan 30 15:44:51.115798 systemd[1]: Started sshd@4-10.243.85.194:22-139.178.89.65:48670.service - OpenSSH per-connection server daemon (139.178.89.65:48670).
Jan 30 15:44:52.017992 sshd[1709]: Accepted publickey for core from 139.178.89.65 port 48670 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:44:52.020197 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:44:52.029310 systemd-logind[1492]: New session 7 of user core.
Jan 30 15:44:52.034516 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 15:44:52.629252 sshd[1711]: Connection closed by 139.178.89.65 port 48670
Jan 30 15:44:52.629015 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Jan 30 15:44:52.633156 systemd[1]: sshd@4-10.243.85.194:22-139.178.89.65:48670.service: Deactivated successfully.
Jan 30 15:44:52.635558 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 15:44:52.638184 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
Jan 30 15:44:52.639797 systemd-logind[1492]: Removed session 7.
Jan 30 15:44:52.789777 systemd[1]: Started sshd@5-10.243.85.194:22-139.178.89.65:53666.service - OpenSSH per-connection server daemon (139.178.89.65:53666).
Jan 30 15:44:53.676634 sshd[1716]: Accepted publickey for core from 139.178.89.65 port 53666 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:44:53.678527 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:44:53.684677 systemd-logind[1492]: New session 8 of user core.
Jan 30 15:44:53.693303 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 15:44:54.295309 sshd[1718]: Connection closed by 139.178.89.65 port 53666
Jan 30 15:44:54.296206 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Jan 30 15:44:54.300892 systemd[1]: sshd@5-10.243.85.194:22-139.178.89.65:53666.service: Deactivated successfully.
Jan 30 15:44:54.302820 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 15:44:54.303751 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Jan 30 15:44:54.305035 systemd-logind[1492]: Removed session 8.
Jan 30 15:44:54.458582 systemd[1]: Started sshd@6-10.243.85.194:22-139.178.89.65:53674.service - OpenSSH per-connection server daemon (139.178.89.65:53674).
Jan 30 15:44:54.718529 systemd[1]: Started sshd@7-10.243.85.194:22-217.65.82.98:36148.service - OpenSSH per-connection server daemon (217.65.82.98:36148).
Jan 30 15:44:55.344304 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 53674 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:44:55.346360 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:44:55.353307 systemd-logind[1492]: New session 9 of user core.
Jan 30 15:44:55.364487 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 15:44:55.500784 sshd[1726]: Invalid user ftpuser from 217.65.82.98 port 36148
Jan 30 15:44:55.648553 sshd[1726]: Received disconnect from 217.65.82.98 port 36148:11: Bye Bye [preauth]
Jan 30 15:44:55.648553 sshd[1726]: Disconnected from invalid user ftpuser 217.65.82.98 port 36148 [preauth]
Jan 30 15:44:55.650925 systemd[1]: sshd@7-10.243.85.194:22-217.65.82.98:36148.service: Deactivated successfully.
Jan 30 15:44:55.830006 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 15:44:55.831039 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 15:44:55.846936 sudo[1731]: pam_unix(sudo:session): session closed for user root
Jan 30 15:44:55.991678 sshd[1728]: Connection closed by 139.178.89.65 port 53674
Jan 30 15:44:55.990672 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
Jan 30 15:44:55.996902 systemd[1]: sshd@6-10.243.85.194:22-139.178.89.65:53674.service: Deactivated successfully.
Jan 30 15:44:55.999081 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 15:44:56.000066 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit.
Jan 30 15:44:56.001841 systemd-logind[1492]: Removed session 9.
Jan 30 15:44:56.140832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 15:44:56.150346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:44:56.152926 systemd[1]: Started sshd@8-10.243.85.194:22-139.178.89.65:53690.service - OpenSSH per-connection server daemon (139.178.89.65:53690).
Jan 30 15:44:56.300524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:44:56.315600 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 15:44:56.362488 kubelet[1746]: E0130 15:44:56.362365 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 15:44:56.365499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 15:44:56.365887 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 15:44:57.040331 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 53690 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:44:57.042192 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:44:57.048439 systemd-logind[1492]: New session 10 of user core.
Jan 30 15:44:57.058911 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 15:44:57.515325 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 15:44:57.516575 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 15:44:57.522781 sudo[1755]: pam_unix(sudo:session): session closed for user root
Jan 30 15:44:57.531004 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 30 15:44:57.531474 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 15:44:57.552791 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 15:44:57.596872 augenrules[1777]: No rules
Jan 30 15:44:57.598782 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 15:44:57.599188 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 15:44:57.601179 sudo[1754]: pam_unix(sudo:session): session closed for user root
Jan 30 15:44:57.744179 sshd[1753]: Connection closed by 139.178.89.65 port 53690
Jan 30 15:44:57.745278 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Jan 30 15:44:57.750547 systemd[1]: sshd@8-10.243.85.194:22-139.178.89.65:53690.service: Deactivated successfully.
Jan 30 15:44:57.752705 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 15:44:57.753889 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Jan 30 15:44:57.755803 systemd-logind[1492]: Removed session 10.
Jan 30 15:44:57.914912 systemd[1]: Started sshd@9-10.243.85.194:22-139.178.89.65:53694.service - OpenSSH per-connection server daemon (139.178.89.65:53694).
Jan 30 15:44:58.807136 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 53694 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:44:58.809281 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:44:58.816454 systemd-logind[1492]: New session 11 of user core.
Jan 30 15:44:58.827515 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 15:44:59.285543 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 15:44:59.285994 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 15:44:59.753429 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 15:44:59.765678 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 15:45:00.166351 dockerd[1806]: time="2025-01-30T15:45:00.165691418Z" level=info msg="Starting up"
Jan 30 15:45:00.277862 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2474779823-merged.mount: Deactivated successfully.
Jan 30 15:45:00.311165 dockerd[1806]: time="2025-01-30T15:45:00.310778048Z" level=info msg="Loading containers: start."
Jan 30 15:45:00.526161 kernel: Initializing XFRM netlink socket
Jan 30 15:45:00.637587 systemd-networkd[1425]: docker0: Link UP
Jan 30 15:45:00.686981 dockerd[1806]: time="2025-01-30T15:45:00.686934646Z" level=info msg="Loading containers: done."
Jan 30 15:45:00.713573 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4138506548-merged.mount: Deactivated successfully.
Jan 30 15:45:00.715132 dockerd[1806]: time="2025-01-30T15:45:00.715052335Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 15:45:00.715356 dockerd[1806]: time="2025-01-30T15:45:00.715302710Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 30 15:45:00.715513 dockerd[1806]: time="2025-01-30T15:45:00.715479473Z" level=info msg="Daemon has completed initialization"
Jan 30 15:45:00.751741 dockerd[1806]: time="2025-01-30T15:45:00.751625416Z" level=info msg="API listen on /run/docker.sock"
Jan 30 15:45:00.752540 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 15:45:01.719120 containerd[1507]: time="2025-01-30T15:45:01.719018051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\""
Jan 30 15:45:02.551025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618349165.mount: Deactivated successfully.
Jan 30 15:45:04.335380 containerd[1507]: time="2025-01-30T15:45:04.335315514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:04.338678 containerd[1507]: time="2025-01-30T15:45:04.338617098Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674832"
Jan 30 15:45:04.339753 containerd[1507]: time="2025-01-30T15:45:04.339691999Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:04.345750 containerd[1507]: time="2025-01-30T15:45:04.345703536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:04.349910 containerd[1507]: time="2025-01-30T15:45:04.349846732Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 2.630708011s"
Jan 30 15:45:04.349910 containerd[1507]: time="2025-01-30T15:45:04.349900564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\""
Jan 30 15:45:04.350907 containerd[1507]: time="2025-01-30T15:45:04.350865508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\""
Jan 30 15:45:05.176945 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 15:45:06.301854 containerd[1507]: time="2025-01-30T15:45:06.301766433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:06.303209 containerd[1507]: time="2025-01-30T15:45:06.303166769Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770719"
Jan 30 15:45:06.303880 containerd[1507]: time="2025-01-30T15:45:06.303816149Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:06.307499 containerd[1507]: time="2025-01-30T15:45:06.307430265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:06.309178 containerd[1507]: time="2025-01-30T15:45:06.308953561Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.958045007s"
Jan 30 15:45:06.309178 containerd[1507]: time="2025-01-30T15:45:06.309020847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\""
Jan 30 15:45:06.309957 containerd[1507]: time="2025-01-30T15:45:06.309930127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\""
Jan 30 15:45:06.479390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 30 15:45:06.492463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:45:06.644256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:45:06.650056 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 15:45:06.760847 kubelet[2066]: E0130 15:45:06.760764 2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 15:45:06.763576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 15:45:06.763806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 15:45:08.301542 containerd[1507]: time="2025-01-30T15:45:08.301462791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:08.302953 containerd[1507]: time="2025-01-30T15:45:08.302911008Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169767"
Jan 30 15:45:08.303770 containerd[1507]: time="2025-01-30T15:45:08.303695964Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:08.308879 containerd[1507]: time="2025-01-30T15:45:08.308802738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:08.310605 containerd[1507]: time="2025-01-30T15:45:08.310412319Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 2.000332078s"
Jan 30 15:45:08.310605 containerd[1507]: time="2025-01-30T15:45:08.310471048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\""
Jan 30 15:45:08.311422 containerd[1507]: time="2025-01-30T15:45:08.311392950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 30 15:45:10.035703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201932543.mount: Deactivated successfully.
Jan 30 15:45:10.884566 containerd[1507]: time="2025-01-30T15:45:10.884259747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:10.885660 containerd[1507]: time="2025-01-30T15:45:10.885384427Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474"
Jan 30 15:45:10.886576 containerd[1507]: time="2025-01-30T15:45:10.886504174Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:10.889381 containerd[1507]: time="2025-01-30T15:45:10.889327917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:10.890500 containerd[1507]: time="2025-01-30T15:45:10.890330583Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.578897439s"
Jan 30 15:45:10.890500 containerd[1507]: time="2025-01-30T15:45:10.890373459Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 30 15:45:10.891554 containerd[1507]: time="2025-01-30T15:45:10.891525319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 30 15:45:11.474946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025773663.mount: Deactivated successfully.
Jan 30 15:45:12.841832 containerd[1507]: time="2025-01-30T15:45:12.841648776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:12.843025 containerd[1507]: time="2025-01-30T15:45:12.842982988Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jan 30 15:45:12.844220 containerd[1507]: time="2025-01-30T15:45:12.844143107Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:12.849014 containerd[1507]: time="2025-01-30T15:45:12.848957882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:12.850876 containerd[1507]: time="2025-01-30T15:45:12.850418710Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.958851668s"
Jan 30 15:45:12.850876 containerd[1507]: time="2025-01-30T15:45:12.850458807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 30 15:45:12.851305 containerd[1507]: time="2025-01-30T15:45:12.851266787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 30 15:45:13.406058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195209015.mount: Deactivated successfully.
Jan 30 15:45:13.413165 containerd[1507]: time="2025-01-30T15:45:13.413088201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:13.439969 containerd[1507]: time="2025-01-30T15:45:13.439876585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 30 15:45:13.441871 containerd[1507]: time="2025-01-30T15:45:13.441782170Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:13.449368 containerd[1507]: time="2025-01-30T15:45:13.449298094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:13.450440 containerd[1507]: time="2025-01-30T15:45:13.450407026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.001976ms"
Jan 30 15:45:13.450746 containerd[1507]: time="2025-01-30T15:45:13.450567975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 30 15:45:13.451952 containerd[1507]: time="2025-01-30T15:45:13.451904045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 30 15:45:14.108199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866166405.mount: Deactivated successfully.
Jan 30 15:45:16.979267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 15:45:16.990568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:45:17.006479 containerd[1507]: time="2025-01-30T15:45:17.006151302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:17.010000 containerd[1507]: time="2025-01-30T15:45:17.009934022Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328"
Jan 30 15:45:17.023930 containerd[1507]: time="2025-01-30T15:45:17.023449958Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:17.030390 containerd[1507]: time="2025-01-30T15:45:17.030207748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 15:45:17.033315 containerd[1507]: time="2025-01-30T15:45:17.033261551Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.581287819s"
Jan 30 15:45:17.034237 containerd[1507]: time="2025-01-30T15:45:17.034054805Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 30 15:45:17.290819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:45:17.300662 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 15:45:17.378857 kubelet[2211]: E0130 15:45:17.378686 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 15:45:17.381381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 15:45:17.381628 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 15:45:17.905548 update_engine[1494]: I20250130 15:45:17.904320 1494 update_attempter.cc:509] Updating boot flags...
Jan 30 15:45:17.991125 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2238)
Jan 30 15:45:18.067138 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2240)
Jan 30 15:45:21.459543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:45:21.467479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:45:21.507573 systemd[1]: Reloading requested from client PID 2252 ('systemctl') (unit session-11.scope)...
Jan 30 15:45:21.507611 systemd[1]: Reloading...
Jan 30 15:45:21.681152 zram_generator::config[2291]: No configuration found.
Jan 30 15:45:21.821853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:45:21.928354 systemd[1]: Reloading finished in 420 ms.
Jan 30 15:45:22.003406 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 15:45:22.003546 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 15:45:22.004344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:45:22.009533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 15:45:22.385514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 15:45:22.399599 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 15:45:22.466513 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 15:45:22.466513 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 15:45:22.466513 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 15:45:22.467138 kubelet[2358]: I0130 15:45:22.466669 2358 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 15:45:23.202336 kubelet[2358]: I0130 15:45:23.201394 2358 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 30 15:45:23.202336 kubelet[2358]: I0130 15:45:23.201453 2358 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 15:45:23.202336 kubelet[2358]: I0130 15:45:23.201815 2358 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 30 15:45:23.231012 kubelet[2358]: E0130 15:45:23.230929 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.85.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:23.231487 kubelet[2358]: I0130 15:45:23.231127 2358 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 15:45:23.246561 kubelet[2358]: E0130 15:45:23.246499 2358 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 15:45:23.247165 kubelet[2358]: I0130 15:45:23.246823 2358 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 15:45:23.253582 kubelet[2358]: I0130 15:45:23.253557 2358 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 15:45:23.257967 kubelet[2358]: I0130 15:45:23.257911 2358 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 15:45:23.258864 kubelet[2358]: I0130 15:45:23.258096 2358 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-eom3a.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 15:45:23.258864 kubelet[2358]: I0130 15:45:23.258463 2358 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 15:45:23.258864 kubelet[2358]: I0130 15:45:23.258481 2358 container_manager_linux.go:304] "Creating device plugin manager"
Jan 30 15:45:23.258864 kubelet[2358]: I0130 15:45:23.258714 2358 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 15:45:23.262371 kubelet[2358]: I0130 15:45:23.262348 2358 kubelet.go:446] "Attempting to sync node with API server"
Jan 30 15:45:23.262470 kubelet[2358]: I0130 15:45:23.262376 2358 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 15:45:23.262470 kubelet[2358]: I0130 15:45:23.262412 2358 kubelet.go:352] "Adding apiserver pod source"
Jan 30 15:45:23.262470 kubelet[2358]: I0130 15:45:23.262432 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 15:45:23.270482 kubelet[2358]: I0130 15:45:23.270451 2358 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 15:45:23.274263 kubelet[2358]: I0130 15:45:23.274220 2358 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 15:45:23.274655 kubelet[2358]: W0130 15:45:23.274592 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.85.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eom3a.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused
Jan 30 15:45:23.274827 kubelet[2358]: E0130 15:45:23.274791 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.85.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eom3a.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:45:23.274987 kubelet[2358]: W0130 15:45:23.274958 2358 probe.go:272] Flexvolume plugin directory at
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 15:45:23.275172 kubelet[2358]: W0130 15:45:23.275127 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.85.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused Jan 30 15:45:23.275341 kubelet[2358]: E0130 15:45:23.275312 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.85.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:23.276404 kubelet[2358]: I0130 15:45:23.275984 2358 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 15:45:23.276404 kubelet[2358]: I0130 15:45:23.276039 2358 server.go:1287] "Started kubelet" Jan 30 15:45:23.276946 kubelet[2358]: I0130 15:45:23.276911 2358 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:45:23.279515 kubelet[2358]: I0130 15:45:23.279492 2358 server.go:490] "Adding debug handlers to kubelet server" Jan 30 15:45:23.279821 kubelet[2358]: I0130 15:45:23.279735 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:45:23.280306 kubelet[2358]: I0130 15:45:23.280279 2358 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:45:23.284491 kubelet[2358]: I0130 15:45:23.284465 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:45:23.284673 kubelet[2358]: E0130 15:45:23.281326 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.85.194:6443/api/v1/namespaces/default/events\": dial tcp 10.243.85.194:6443: 
connect: connection refused" event="&Event{ObjectMeta:{srv-eom3a.gb1.brightbox.com.181f82ea21018fe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-eom3a.gb1.brightbox.com,UID:srv-eom3a.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-eom3a.gb1.brightbox.com,},FirstTimestamp:2025-01-30 15:45:23.27600944 +0000 UTC m=+0.868966477,LastTimestamp:2025-01-30 15:45:23.27600944 +0000 UTC m=+0.868966477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-eom3a.gb1.brightbox.com,}" Jan 30 15:45:23.284673 kubelet[2358]: I0130 15:45:23.284769 2358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 15:45:23.291412 kubelet[2358]: E0130 15:45:23.290795 2358 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-eom3a.gb1.brightbox.com\" not found" Jan 30 15:45:23.291412 kubelet[2358]: I0130 15:45:23.290846 2358 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 15:45:23.294158 kubelet[2358]: I0130 15:45:23.292139 2358 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:45:23.294158 kubelet[2358]: I0130 15:45:23.292235 2358 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:45:23.294158 kubelet[2358]: W0130 15:45:23.292681 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.85.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused Jan 30 15:45:23.294158 kubelet[2358]: E0130 15:45:23.292732 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.85.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:23.294158 kubelet[2358]: E0130 15:45:23.293012 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.85.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eom3a.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.85.194:6443: connect: connection refused" interval="200ms" Jan 30 15:45:23.295523 kubelet[2358]: I0130 15:45:23.294816 2358 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:45:23.295523 kubelet[2358]: I0130 15:45:23.294913 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:45:23.297140 kubelet[2358]: I0130 15:45:23.297120 2358 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:45:23.300202 kubelet[2358]: E0130 15:45:23.300174 2358 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:45:23.335724 kubelet[2358]: I0130 15:45:23.335652 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:45:23.338205 kubelet[2358]: I0130 15:45:23.338170 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:45:23.338375 kubelet[2358]: I0130 15:45:23.338355 2358 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 15:45:23.338591 kubelet[2358]: I0130 15:45:23.338482 2358 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 15:45:23.338591 kubelet[2358]: I0130 15:45:23.338531 2358 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 15:45:23.339044 kubelet[2358]: E0130 15:45:23.338832 2358 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:45:23.339044 kubelet[2358]: I0130 15:45:23.338971 2358 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 15:45:23.339044 kubelet[2358]: I0130 15:45:23.338988 2358 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 15:45:23.339044 kubelet[2358]: I0130 15:45:23.339017 2358 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:45:23.341734 kubelet[2358]: I0130 15:45:23.341431 2358 policy_none.go:49] "None policy: Start" Jan 30 15:45:23.341734 kubelet[2358]: I0130 15:45:23.341466 2358 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 15:45:23.341734 kubelet[2358]: I0130 15:45:23.341490 2358 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:45:23.344460 kubelet[2358]: W0130 15:45:23.344413 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.85.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused Jan 30 15:45:23.344703 kubelet[2358]: E0130 15:45:23.344673 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.85.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:23.353228 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 30 15:45:23.366754 kubelet[2358]: E0130 15:45:23.366625 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.85.194:6443/api/v1/namespaces/default/events\": dial tcp 10.243.85.194:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-eom3a.gb1.brightbox.com.181f82ea21018fe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-eom3a.gb1.brightbox.com,UID:srv-eom3a.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-eom3a.gb1.brightbox.com,},FirstTimestamp:2025-01-30 15:45:23.27600944 +0000 UTC m=+0.868966477,LastTimestamp:2025-01-30 15:45:23.27600944 +0000 UTC m=+0.868966477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-eom3a.gb1.brightbox.com,}" Jan 30 15:45:23.372016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 15:45:23.378397 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 15:45:23.388639 kubelet[2358]: I0130 15:45:23.388597 2358 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:45:23.388886 kubelet[2358]: I0130 15:45:23.388865 2358 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 15:45:23.388970 kubelet[2358]: I0130 15:45:23.388899 2358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:45:23.389680 kubelet[2358]: I0130 15:45:23.389660 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:45:23.391395 kubelet[2358]: E0130 15:45:23.391301 2358 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 15:45:23.391395 kubelet[2358]: E0130 15:45:23.391368 2358 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-eom3a.gb1.brightbox.com\" not found" Jan 30 15:45:23.458682 systemd[1]: Created slice kubepods-burstable-pod2a229f0ee05c44ac32715b879a6a1355.slice - libcontainer container kubepods-burstable-pod2a229f0ee05c44ac32715b879a6a1355.slice. Jan 30 15:45:23.473650 kubelet[2358]: E0130 15:45:23.473345 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.475946 systemd[1]: Created slice kubepods-burstable-pod22ee2a199cd25103cff5807630a16274.slice - libcontainer container kubepods-burstable-pod22ee2a199cd25103cff5807630a16274.slice. 
Jan 30 15:45:23.488023 kubelet[2358]: E0130 15:45:23.487709 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.492972 kubelet[2358]: I0130 15:45:23.492924 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a229f0ee05c44ac32715b879a6a1355-ca-certs\") pod \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" (UID: \"2a229f0ee05c44ac32715b879a6a1355\") " pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.493016 systemd[1]: Created slice kubepods-burstable-poddbefacfc89319b26503c678c3b7d3cb9.slice - libcontainer container kubepods-burstable-poddbefacfc89319b26503c678c3b7d3cb9.slice. Jan 30 15:45:23.493491 kubelet[2358]: I0130 15:45:23.493348 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a229f0ee05c44ac32715b879a6a1355-k8s-certs\") pod \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" (UID: \"2a229f0ee05c44ac32715b879a6a1355\") " pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.493491 kubelet[2358]: I0130 15:45:23.493386 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a229f0ee05c44ac32715b879a6a1355-usr-share-ca-certificates\") pod \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" (UID: \"2a229f0ee05c44ac32715b879a6a1355\") " pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.493491 kubelet[2358]: I0130 15:45:23.493436 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-ca-certs\") pod 
\"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.493730 kubelet[2358]: I0130 15:45:23.493468 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-flexvolume-dir\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.493730 kubelet[2358]: I0130 15:45:23.493682 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-kubeconfig\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.494025 kubelet[2358]: I0130 15:45:23.493712 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-k8s-certs\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.494025 kubelet[2358]: I0130 15:45:23.493906 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.494025 kubelet[2358]: I0130 
15:45:23.493956 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbefacfc89319b26503c678c3b7d3cb9-kubeconfig\") pod \"kube-scheduler-srv-eom3a.gb1.brightbox.com\" (UID: \"dbefacfc89319b26503c678c3b7d3cb9\") " pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.495192 kubelet[2358]: E0130 15:45:23.495139 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.85.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eom3a.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.85.194:6443: connect: connection refused" interval="400ms" Jan 30 15:45:23.496271 kubelet[2358]: E0130 15:45:23.496237 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.497642 kubelet[2358]: I0130 15:45:23.497611 2358 kubelet_node_status.go:76] "Attempting to register node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.498009 kubelet[2358]: E0130 15:45:23.497981 2358 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.85.194:6443/api/v1/nodes\": dial tcp 10.243.85.194:6443: connect: connection refused" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.701797 kubelet[2358]: I0130 15:45:23.701744 2358 kubelet_node_status.go:76] "Attempting to register node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.702218 kubelet[2358]: E0130 15:45:23.702163 2358 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.85.194:6443/api/v1/nodes\": dial tcp 10.243.85.194:6443: connect: connection refused" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:23.775411 containerd[1507]: time="2025-01-30T15:45:23.775240679Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-eom3a.gb1.brightbox.com,Uid:2a229f0ee05c44ac32715b879a6a1355,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:23.793140 containerd[1507]: time="2025-01-30T15:45:23.793078455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-eom3a.gb1.brightbox.com,Uid:22ee2a199cd25103cff5807630a16274,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:23.797764 containerd[1507]: time="2025-01-30T15:45:23.797722164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-eom3a.gb1.brightbox.com,Uid:dbefacfc89319b26503c678c3b7d3cb9,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:23.896207 kubelet[2358]: E0130 15:45:23.896092 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.85.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eom3a.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.85.194:6443: connect: connection refused" interval="800ms" Jan 30 15:45:24.106156 kubelet[2358]: I0130 15:45:24.106026 2358 kubelet_node_status.go:76] "Attempting to register node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:24.106538 kubelet[2358]: E0130 15:45:24.106489 2358 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.85.194:6443/api/v1/nodes\": dial tcp 10.243.85.194:6443: connect: connection refused" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:24.134444 kubelet[2358]: W0130 15:45:24.134333 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.85.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused Jan 30 15:45:24.134444 kubelet[2358]: E0130 15:45:24.134438 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.243.85.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:24.341610 kubelet[2358]: W0130 15:45:24.338299 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.85.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused Jan 30 15:45:24.341610 kubelet[2358]: E0130 15:45:24.338405 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.85.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:24.353183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974513.mount: Deactivated successfully. 
Jan 30 15:45:24.376384 containerd[1507]: time="2025-01-30T15:45:24.374934134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:24.391260 containerd[1507]: time="2025-01-30T15:45:24.391160301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 15:45:24.393429 containerd[1507]: time="2025-01-30T15:45:24.393392775Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:24.395301 containerd[1507]: time="2025-01-30T15:45:24.394935596Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:24.395301 containerd[1507]: time="2025-01-30T15:45:24.395066137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:45:24.397906 containerd[1507]: time="2025-01-30T15:45:24.397870716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:24.398387 containerd[1507]: time="2025-01-30T15:45:24.398334170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:45:24.399505 containerd[1507]: time="2025-01-30T15:45:24.399472418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:24.402346 
containerd[1507]: time="2025-01-30T15:45:24.402313001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.107823ms" Jan 30 15:45:24.404782 containerd[1507]: time="2025-01-30T15:45:24.404664633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.115353ms" Jan 30 15:45:24.409129 containerd[1507]: time="2025-01-30T15:45:24.408807984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.001392ms" Jan 30 15:45:24.464336 kubelet[2358]: W0130 15:45:24.464175 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.85.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eom3a.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused Jan 30 15:45:24.464336 kubelet[2358]: E0130 15:45:24.464341 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.85.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eom3a.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:24.634887 
kubelet[2358]: W0130 15:45:24.633863 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.85.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.85.194:6443: connect: connection refused Jan 30 15:45:24.634887 kubelet[2358]: E0130 15:45:24.633940 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.85.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:24.651984 containerd[1507]: time="2025-01-30T15:45:24.651799267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:24.652362 containerd[1507]: time="2025-01-30T15:45:24.651952017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:24.652362 containerd[1507]: time="2025-01-30T15:45:24.651980020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:24.652362 containerd[1507]: time="2025-01-30T15:45:24.652248104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:24.658126 containerd[1507]: time="2025-01-30T15:45:24.657707364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:24.658126 containerd[1507]: time="2025-01-30T15:45:24.657797229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:24.658126 containerd[1507]: time="2025-01-30T15:45:24.657822049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:24.659174 containerd[1507]: time="2025-01-30T15:45:24.658881644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:24.664642 containerd[1507]: time="2025-01-30T15:45:24.662345958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:24.664642 containerd[1507]: time="2025-01-30T15:45:24.662427051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:24.664642 containerd[1507]: time="2025-01-30T15:45:24.662456232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:24.664642 containerd[1507]: time="2025-01-30T15:45:24.662595279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:24.698878 kubelet[2358]: E0130 15:45:24.697646 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.85.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eom3a.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.85.194:6443: connect: connection refused" interval="1.6s" Jan 30 15:45:24.697712 systemd[1]: Started cri-containerd-b3f28f179b41fe0ccf741da23597451148056e27be89abe900c6f2981971943f.scope - libcontainer container b3f28f179b41fe0ccf741da23597451148056e27be89abe900c6f2981971943f. 
Jan 30 15:45:24.718648 systemd[1]: Started cri-containerd-2ff63766196bc4b8bf1cd2202f3be5eed8be8b5d23dbc15ab4bf480974e07729.scope - libcontainer container 2ff63766196bc4b8bf1cd2202f3be5eed8be8b5d23dbc15ab4bf480974e07729. Jan 30 15:45:24.729754 systemd[1]: Started cri-containerd-163442e133d0cadf9b4f61f4f942d8e1bc3045c31f752afb5403350072978fb2.scope - libcontainer container 163442e133d0cadf9b4f61f4f942d8e1bc3045c31f752afb5403350072978fb2. Jan 30 15:45:24.835275 containerd[1507]: time="2025-01-30T15:45:24.834701820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-eom3a.gb1.brightbox.com,Uid:dbefacfc89319b26503c678c3b7d3cb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3f28f179b41fe0ccf741da23597451148056e27be89abe900c6f2981971943f\"" Jan 30 15:45:24.836609 containerd[1507]: time="2025-01-30T15:45:24.836313136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-eom3a.gb1.brightbox.com,Uid:22ee2a199cd25103cff5807630a16274,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ff63766196bc4b8bf1cd2202f3be5eed8be8b5d23dbc15ab4bf480974e07729\"" Jan 30 15:45:24.843193 containerd[1507]: time="2025-01-30T15:45:24.843063061Z" level=info msg="CreateContainer within sandbox \"2ff63766196bc4b8bf1cd2202f3be5eed8be8b5d23dbc15ab4bf480974e07729\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 15:45:24.843714 containerd[1507]: time="2025-01-30T15:45:24.843578062Z" level=info msg="CreateContainer within sandbox \"b3f28f179b41fe0ccf741da23597451148056e27be89abe900c6f2981971943f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 15:45:24.855503 containerd[1507]: time="2025-01-30T15:45:24.855315146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-eom3a.gb1.brightbox.com,Uid:2a229f0ee05c44ac32715b879a6a1355,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"163442e133d0cadf9b4f61f4f942d8e1bc3045c31f752afb5403350072978fb2\"" Jan 30 15:45:24.863403 containerd[1507]: time="2025-01-30T15:45:24.863293352Z" level=info msg="CreateContainer within sandbox \"163442e133d0cadf9b4f61f4f942d8e1bc3045c31f752afb5403350072978fb2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 15:45:24.868369 containerd[1507]: time="2025-01-30T15:45:24.867830852Z" level=info msg="CreateContainer within sandbox \"2ff63766196bc4b8bf1cd2202f3be5eed8be8b5d23dbc15ab4bf480974e07729\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"414edfea6bcdcc86fb8033c7226732c8a98c80e7d3097f0a313759f2569ff094\"" Jan 30 15:45:24.869692 containerd[1507]: time="2025-01-30T15:45:24.869647265Z" level=info msg="StartContainer for \"414edfea6bcdcc86fb8033c7226732c8a98c80e7d3097f0a313759f2569ff094\"" Jan 30 15:45:24.900065 containerd[1507]: time="2025-01-30T15:45:24.898995783Z" level=info msg="CreateContainer within sandbox \"b3f28f179b41fe0ccf741da23597451148056e27be89abe900c6f2981971943f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"836e55715c072122ddbd24f8db0e4dd61286fa5d33296e7e7c9981940a43d8ef\"" Jan 30 15:45:24.902662 containerd[1507]: time="2025-01-30T15:45:24.902562467Z" level=info msg="StartContainer for \"836e55715c072122ddbd24f8db0e4dd61286fa5d33296e7e7c9981940a43d8ef\"" Jan 30 15:45:24.911502 containerd[1507]: time="2025-01-30T15:45:24.910334466Z" level=info msg="CreateContainer within sandbox \"163442e133d0cadf9b4f61f4f942d8e1bc3045c31f752afb5403350072978fb2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4fe14d2b26ed827ef1391e7b7a715a1e898756ef100e949dcf818f8e6b5e109a\"" Jan 30 15:45:24.912856 containerd[1507]: time="2025-01-30T15:45:24.912813086Z" level=info msg="StartContainer for \"4fe14d2b26ed827ef1391e7b7a715a1e898756ef100e949dcf818f8e6b5e109a\"" Jan 30 15:45:24.914509 kubelet[2358]: I0130 15:45:24.914435 2358 
kubelet_node_status.go:76] "Attempting to register node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:24.915456 kubelet[2358]: E0130 15:45:24.915389 2358 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.85.194:6443/api/v1/nodes\": dial tcp 10.243.85.194:6443: connect: connection refused" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:24.926431 systemd[1]: Started cri-containerd-414edfea6bcdcc86fb8033c7226732c8a98c80e7d3097f0a313759f2569ff094.scope - libcontainer container 414edfea6bcdcc86fb8033c7226732c8a98c80e7d3097f0a313759f2569ff094. Jan 30 15:45:24.983334 systemd[1]: Started cri-containerd-4fe14d2b26ed827ef1391e7b7a715a1e898756ef100e949dcf818f8e6b5e109a.scope - libcontainer container 4fe14d2b26ed827ef1391e7b7a715a1e898756ef100e949dcf818f8e6b5e109a. Jan 30 15:45:24.994353 systemd[1]: Started cri-containerd-836e55715c072122ddbd24f8db0e4dd61286fa5d33296e7e7c9981940a43d8ef.scope - libcontainer container 836e55715c072122ddbd24f8db0e4dd61286fa5d33296e7e7c9981940a43d8ef. 
Jan 30 15:45:25.040453 containerd[1507]: time="2025-01-30T15:45:25.040383867Z" level=info msg="StartContainer for \"414edfea6bcdcc86fb8033c7226732c8a98c80e7d3097f0a313759f2569ff094\" returns successfully" Jan 30 15:45:25.111512 containerd[1507]: time="2025-01-30T15:45:25.111344448Z" level=info msg="StartContainer for \"4fe14d2b26ed827ef1391e7b7a715a1e898756ef100e949dcf818f8e6b5e109a\" returns successfully" Jan 30 15:45:25.120597 containerd[1507]: time="2025-01-30T15:45:25.120425628Z" level=info msg="StartContainer for \"836e55715c072122ddbd24f8db0e4dd61286fa5d33296e7e7c9981940a43d8ef\" returns successfully" Jan 30 15:45:25.306325 kubelet[2358]: E0130 15:45:25.306270 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.85.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.85.194:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:45:25.378320 kubelet[2358]: E0130 15:45:25.377721 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:25.379136 kubelet[2358]: E0130 15:45:25.379092 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:25.384166 kubelet[2358]: E0130 15:45:25.381668 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:26.380150 kubelet[2358]: E0130 15:45:26.378735 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:26.380769 kubelet[2358]: E0130 15:45:26.380340 2358 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:26.520198 kubelet[2358]: I0130 15:45:26.519654 2358 kubelet_node_status.go:76] "Attempting to register node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.198801 kubelet[2358]: E0130 15:45:28.198746 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-eom3a.gb1.brightbox.com\" not found" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.267242 kubelet[2358]: I0130 15:45:28.267194 2358 apiserver.go:52] "Watching apiserver" Jan 30 15:45:28.283127 kubelet[2358]: I0130 15:45:28.282529 2358 kubelet_node_status.go:79] "Successfully registered node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.292983 kubelet[2358]: I0130 15:45:28.292947 2358 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:45:28.293366 kubelet[2358]: I0130 15:45:28.293180 2358 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.306490 kubelet[2358]: E0130 15:45:28.306235 2358 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-eom3a.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.306490 kubelet[2358]: I0130 15:45:28.306269 2358 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.313480 kubelet[2358]: E0130 15:45:28.313257 2358 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.313480 kubelet[2358]: I0130 15:45:28.313289 2358 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:28.320434 kubelet[2358]: E0130 15:45:28.320377 2358 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:30.350273 kubelet[2358]: I0130 15:45:30.350052 2358 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:30.360820 kubelet[2358]: W0130 15:45:30.360780 2358 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:45:30.417931 systemd[1]: Reloading requested from client PID 2643 ('systemctl') (unit session-11.scope)... Jan 30 15:45:30.418174 systemd[1]: Reloading... Jan 30 15:45:30.556128 zram_generator::config[2682]: No configuration found. Jan 30 15:45:30.729892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:45:30.856038 systemd[1]: Reloading finished in 437 ms. Jan 30 15:45:30.911385 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:30.922707 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 15:45:30.923322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:30.923536 systemd[1]: kubelet.service: Consumed 1.367s CPU time, 121.4M memory peak, 0B memory swap peak. 
Jan 30 15:45:30.929451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:31.169836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:31.182809 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:45:31.314081 kubelet[2745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:45:31.314081 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 15:45:31.314081 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:45:31.314588 kubelet[2745]: I0130 15:45:31.314182 2745 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:45:31.324779 kubelet[2745]: I0130 15:45:31.324696 2745 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 15:45:31.324779 kubelet[2745]: I0130 15:45:31.324744 2745 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:45:31.325089 kubelet[2745]: I0130 15:45:31.325058 2745 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 15:45:31.333058 kubelet[2745]: I0130 15:45:31.332931 2745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 15:45:31.339619 kubelet[2745]: I0130 15:45:31.339582 2745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:45:31.346136 kubelet[2745]: E0130 15:45:31.344866 2745 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 15:45:31.346136 kubelet[2745]: I0130 15:45:31.344904 2745 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 15:45:31.350531 kubelet[2745]: I0130 15:45:31.350503 2745 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:45:31.350904 kubelet[2745]: I0130 15:45:31.350860 2745 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:45:31.351129 kubelet[2745]: I0130 15:45:31.350904 2745 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-eom3a.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 15:45:31.351341 kubelet[2745]: I0130 15:45:31.351145 2745 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:45:31.351341 kubelet[2745]: I0130 15:45:31.351162 2745 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 15:45:31.351341 kubelet[2745]: I0130 15:45:31.351236 2745 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:45:31.351484 kubelet[2745]: I0130 15:45:31.351450 2745 
kubelet.go:446] "Attempting to sync node with API server" Jan 30 15:45:31.351484 kubelet[2745]: I0130 15:45:31.351470 2745 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:45:31.352444 kubelet[2745]: I0130 15:45:31.352421 2745 kubelet.go:352] "Adding apiserver pod source" Jan 30 15:45:31.352502 kubelet[2745]: I0130 15:45:31.352448 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:45:31.356130 kubelet[2745]: I0130 15:45:31.354563 2745 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 15:45:31.356130 kubelet[2745]: I0130 15:45:31.355035 2745 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:45:31.356130 kubelet[2745]: I0130 15:45:31.355635 2745 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 15:45:31.356130 kubelet[2745]: I0130 15:45:31.355685 2745 server.go:1287] "Started kubelet" Jan 30 15:45:31.360159 kubelet[2745]: I0130 15:45:31.360124 2745 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:45:31.365130 kubelet[2745]: I0130 15:45:31.363578 2745 server.go:490] "Adding debug handlers to kubelet server" Jan 30 15:45:31.373296 kubelet[2745]: I0130 15:45:31.372959 2745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:45:31.374329 kubelet[2745]: I0130 15:45:31.374308 2745 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:45:31.376582 kubelet[2745]: I0130 15:45:31.375453 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:45:31.387627 kubelet[2745]: I0130 15:45:31.387595 2745 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 15:45:31.393976 kubelet[2745]: 
I0130 15:45:31.391977 2745 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 15:45:31.394755 kubelet[2745]: E0130 15:45:31.394459 2745 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-eom3a.gb1.brightbox.com\" not found" Jan 30 15:45:31.415546 kubelet[2745]: I0130 15:45:31.415503 2745 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:45:31.417215 kubelet[2745]: I0130 15:45:31.416686 2745 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:45:31.423150 kubelet[2745]: I0130 15:45:31.422796 2745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:45:31.424695 kubelet[2745]: I0130 15:45:31.424360 2745 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:45:31.424695 kubelet[2745]: I0130 15:45:31.424488 2745 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:45:31.426697 kubelet[2745]: I0130 15:45:31.426306 2745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:45:31.426697 kubelet[2745]: I0130 15:45:31.426339 2745 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 15:45:31.426697 kubelet[2745]: I0130 15:45:31.426360 2745 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 15:45:31.426697 kubelet[2745]: I0130 15:45:31.426370 2745 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 15:45:31.426697 kubelet[2745]: E0130 15:45:31.426439 2745 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:45:31.440624 kubelet[2745]: E0130 15:45:31.440426 2745 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:45:31.440765 kubelet[2745]: I0130 15:45:31.440741 2745 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:45:31.464157 sudo[2772]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 15:45:31.464665 sudo[2772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 15:45:31.526655 kubelet[2745]: E0130 15:45:31.526594 2745 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.545488 2745 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.545514 2745 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.545538 2745 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.545767 2745 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.545792 2745 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.545831 2745 policy_none.go:49] "None policy: Start" Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.545846 2745 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 15:45:31.546257 
kubelet[2745]: I0130 15:45:31.545872 2745 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:45:31.546257 kubelet[2745]: I0130 15:45:31.546032 2745 state_mem.go:75] "Updated machine memory state" Jan 30 15:45:31.558597 kubelet[2745]: I0130 15:45:31.558346 2745 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:45:31.561615 kubelet[2745]: I0130 15:45:31.561414 2745 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 15:45:31.561615 kubelet[2745]: I0130 15:45:31.561438 2745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:45:31.562387 kubelet[2745]: I0130 15:45:31.561958 2745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:45:31.577173 kubelet[2745]: E0130 15:45:31.575281 2745 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 15:45:31.696200 kubelet[2745]: I0130 15:45:31.696004 2745 kubelet_node_status.go:76] "Attempting to register node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.716306 kubelet[2745]: I0130 15:45:31.716205 2745 kubelet_node_status.go:125] "Node was previously registered" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.716306 kubelet[2745]: I0130 15:45:31.716309 2745 kubelet_node_status.go:79] "Successfully registered node" node="srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.730191 kubelet[2745]: I0130 15:45:31.729464 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.730191 kubelet[2745]: I0130 15:45:31.729551 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.730191 kubelet[2745]: I0130 15:45:31.729926 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.747118 kubelet[2745]: W0130 15:45:31.745301 2745 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:45:31.747411 kubelet[2745]: W0130 15:45:31.747390 2745 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:45:31.747653 kubelet[2745]: W0130 15:45:31.747634 2745 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:45:31.747795 kubelet[2745]: E0130 15:45:31.747771 2745 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" already exists" 
pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818508 kubelet[2745]: I0130 15:45:31.818451 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a229f0ee05c44ac32715b879a6a1355-k8s-certs\") pod \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" (UID: \"2a229f0ee05c44ac32715b879a6a1355\") " pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818695 kubelet[2745]: I0130 15:45:31.818518 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-ca-certs\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818695 kubelet[2745]: I0130 15:45:31.818551 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-flexvolume-dir\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818695 kubelet[2745]: I0130 15:45:31.818638 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818695 kubelet[2745]: I0130 15:45:31.818677 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbefacfc89319b26503c678c3b7d3cb9-kubeconfig\") pod \"kube-scheduler-srv-eom3a.gb1.brightbox.com\" (UID: \"dbefacfc89319b26503c678c3b7d3cb9\") " pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818878 kubelet[2745]: I0130 15:45:31.818704 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a229f0ee05c44ac32715b879a6a1355-ca-certs\") pod \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" (UID: \"2a229f0ee05c44ac32715b879a6a1355\") " pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818878 kubelet[2745]: I0130 15:45:31.818729 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a229f0ee05c44ac32715b879a6a1355-usr-share-ca-certificates\") pod \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" (UID: \"2a229f0ee05c44ac32715b879a6a1355\") " pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818878 kubelet[2745]: I0130 15:45:31.818755 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-k8s-certs\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:31.818878 kubelet[2745]: I0130 15:45:31.818783 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22ee2a199cd25103cff5807630a16274-kubeconfig\") pod \"kube-controller-manager-srv-eom3a.gb1.brightbox.com\" (UID: \"22ee2a199cd25103cff5807630a16274\") " pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" Jan 
30 15:45:32.236399 sudo[2772]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:32.379888 kubelet[2745]: I0130 15:45:32.377471 2745 apiserver.go:52] "Watching apiserver" Jan 30 15:45:32.416409 kubelet[2745]: I0130 15:45:32.416359 2745 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:45:32.488144 kubelet[2745]: I0130 15:45:32.487978 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:32.488644 kubelet[2745]: I0130 15:45:32.488617 2745 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:32.497140 kubelet[2745]: W0130 15:45:32.497085 2745 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:45:32.497221 kubelet[2745]: E0130 15:45:32.497168 2745 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-eom3a.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:32.498380 kubelet[2745]: W0130 15:45:32.498356 2745 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:45:32.498451 kubelet[2745]: E0130 15:45:32.498399 2745 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-eom3a.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" Jan 30 15:45:32.535026 kubelet[2745]: I0130 15:45:32.534944 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-eom3a.gb1.brightbox.com" podStartSLOduration=2.534916056 podStartE2EDuration="2.534916056s" podCreationTimestamp="2025-01-30 15:45:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:45:32.522332616 +0000 UTC m=+1.290915683" watchObservedRunningTime="2025-01-30 15:45:32.534916056 +0000 UTC m=+1.303499112" Jan 30 15:45:32.546129 kubelet[2745]: I0130 15:45:32.546044 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-eom3a.gb1.brightbox.com" podStartSLOduration=1.54603116 podStartE2EDuration="1.54603116s" podCreationTimestamp="2025-01-30 15:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:45:32.535729331 +0000 UTC m=+1.304312398" watchObservedRunningTime="2025-01-30 15:45:32.54603116 +0000 UTC m=+1.314614211" Jan 30 15:45:32.559382 kubelet[2745]: I0130 15:45:32.559328 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-eom3a.gb1.brightbox.com" podStartSLOduration=1.559316338 podStartE2EDuration="1.559316338s" podCreationTimestamp="2025-01-30 15:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:45:32.546551517 +0000 UTC m=+1.315134578" watchObservedRunningTime="2025-01-30 15:45:32.559316338 +0000 UTC m=+1.327899387" Jan 30 15:45:33.831443 sudo[1788]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:33.976150 sshd[1787]: Connection closed by 139.178.89.65 port 53694 Jan 30 15:45:33.977776 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:33.981934 systemd[1]: sshd@9-10.243.85.194:22-139.178.89.65:53694.service: Deactivated successfully. Jan 30 15:45:33.985573 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 15:45:33.986352 systemd[1]: session-11.scope: Consumed 6.419s CPU time, 136.4M memory peak, 0B memory swap peak. 
Jan 30 15:45:33.988417 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit. Jan 30 15:45:33.990363 systemd-logind[1492]: Removed session 11. Jan 30 15:45:35.847403 kubelet[2745]: I0130 15:45:35.847326 2745 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 15:45:35.847896 containerd[1507]: time="2025-01-30T15:45:35.847863984Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 15:45:35.848307 kubelet[2745]: I0130 15:45:35.848059 2745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 15:45:36.766138 systemd[1]: Created slice kubepods-besteffort-poda23a91c5_5432_4f52_bc66_3e79109dbd50.slice - libcontainer container kubepods-besteffort-poda23a91c5_5432_4f52_bc66_3e79109dbd50.slice. Jan 30 15:45:36.802884 systemd[1]: Created slice kubepods-burstable-podd4bb8523_69fb_4b61_84f1_745013762ac4.slice - libcontainer container kubepods-burstable-podd4bb8523_69fb_4b61_84f1_745013762ac4.slice. 
Jan 30 15:45:36.847125 kubelet[2745]: I0130 15:45:36.845316 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-hostproc\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847125 kubelet[2745]: I0130 15:45:36.845378 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-kernel\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847125 kubelet[2745]: I0130 15:45:36.845408 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-cgroup\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847125 kubelet[2745]: I0130 15:45:36.845434 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-xtables-lock\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847125 kubelet[2745]: I0130 15:45:36.845459 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-config-path\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847125 kubelet[2745]: I0130 15:45:36.845484 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-lib-modules\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847881 kubelet[2745]: I0130 15:45:36.845509 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87f9\" (UniqueName: \"kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-kube-api-access-n87f9\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847881 kubelet[2745]: I0130 15:45:36.845564 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4bb8523-69fb-4b61-84f1-745013762ac4-clustermesh-secrets\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847881 kubelet[2745]: I0130 15:45:36.845597 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-net\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847881 kubelet[2745]: I0130 15:45:36.845627 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-bpf-maps\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.847881 kubelet[2745]: I0130 15:45:36.845653 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/a23a91c5-5432-4f52-bc66-3e79109dbd50-kube-proxy\") pod \"kube-proxy-9pfn6\" (UID: \"a23a91c5-5432-4f52-bc66-3e79109dbd50\") " pod="kube-system/kube-proxy-9pfn6" Jan 30 15:45:36.848087 kubelet[2745]: I0130 15:45:36.845678 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a23a91c5-5432-4f52-bc66-3e79109dbd50-xtables-lock\") pod \"kube-proxy-9pfn6\" (UID: \"a23a91c5-5432-4f52-bc66-3e79109dbd50\") " pod="kube-system/kube-proxy-9pfn6" Jan 30 15:45:36.848087 kubelet[2745]: I0130 15:45:36.845702 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-etc-cni-netd\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.848087 kubelet[2745]: I0130 15:45:36.845727 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cni-path\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.848087 kubelet[2745]: I0130 15:45:36.845778 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-run\") pod \"cilium-ffm9c\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.848087 kubelet[2745]: I0130 15:45:36.845825 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-hubble-tls\") pod \"cilium-ffm9c\" (UID: 
\"d4bb8523-69fb-4b61-84f1-745013762ac4\") " pod="kube-system/cilium-ffm9c" Jan 30 15:45:36.848087 kubelet[2745]: I0130 15:45:36.845851 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a23a91c5-5432-4f52-bc66-3e79109dbd50-lib-modules\") pod \"kube-proxy-9pfn6\" (UID: \"a23a91c5-5432-4f52-bc66-3e79109dbd50\") " pod="kube-system/kube-proxy-9pfn6" Jan 30 15:45:36.848402 kubelet[2745]: I0130 15:45:36.845882 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctw8b\" (UniqueName: \"kubernetes.io/projected/a23a91c5-5432-4f52-bc66-3e79109dbd50-kube-api-access-ctw8b\") pod \"kube-proxy-9pfn6\" (UID: \"a23a91c5-5432-4f52-bc66-3e79109dbd50\") " pod="kube-system/kube-proxy-9pfn6" Jan 30 15:45:36.852546 kubelet[2745]: W0130 15:45:36.852483 2745 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-eom3a.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-eom3a.gb1.brightbox.com' and this object Jan 30 15:45:36.853222 kubelet[2745]: E0130 15:45:36.853155 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-eom3a.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-eom3a.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 30 15:45:36.858118 kubelet[2745]: W0130 15:45:36.854092 2745 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-eom3a.gb1.brightbox.com" cannot 
list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-eom3a.gb1.brightbox.com' and this object Jan 30 15:45:36.858118 kubelet[2745]: E0130 15:45:36.854144 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-eom3a.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-eom3a.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 30 15:45:36.858442 kubelet[2745]: W0130 15:45:36.858413 2745 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-eom3a.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-eom3a.gb1.brightbox.com' and this object Jan 30 15:45:36.858527 kubelet[2745]: E0130 15:45:36.858461 2745 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-eom3a.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-eom3a.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 30 15:45:37.029221 systemd[1]: Created slice kubepods-besteffort-pod4d1374c7_bfed_44ba_85b0_1668ae143351.slice - libcontainer container kubepods-besteffort-pod4d1374c7_bfed_44ba_85b0_1668ae143351.slice. 
Jan 30 15:45:37.048293 kubelet[2745]: I0130 15:45:37.048195 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d1374c7-bfed-44ba-85b0-1668ae143351-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m82f7\" (UID: \"4d1374c7-bfed-44ba-85b0-1668ae143351\") " pod="kube-system/cilium-operator-6c4d7847fc-m82f7" Jan 30 15:45:37.048469 kubelet[2745]: I0130 15:45:37.048313 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9r7v\" (UniqueName: \"kubernetes.io/projected/4d1374c7-bfed-44ba-85b0-1668ae143351-kube-api-access-l9r7v\") pod \"cilium-operator-6c4d7847fc-m82f7\" (UID: \"4d1374c7-bfed-44ba-85b0-1668ae143351\") " pod="kube-system/cilium-operator-6c4d7847fc-m82f7" Jan 30 15:45:37.078381 containerd[1507]: time="2025-01-30T15:45:37.078326954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9pfn6,Uid:a23a91c5-5432-4f52-bc66-3e79109dbd50,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:37.140595 containerd[1507]: time="2025-01-30T15:45:37.139747855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:37.140595 containerd[1507]: time="2025-01-30T15:45:37.139836130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:37.140595 containerd[1507]: time="2025-01-30T15:45:37.139901257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:37.140595 containerd[1507]: time="2025-01-30T15:45:37.140069562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:37.183318 systemd[1]: Started cri-containerd-db1836e48f4edeeb6bb40967b44d58b7702a5daf80e7e88d86f0ebe1cf0462c3.scope - libcontainer container db1836e48f4edeeb6bb40967b44d58b7702a5daf80e7e88d86f0ebe1cf0462c3. Jan 30 15:45:37.243476 containerd[1507]: time="2025-01-30T15:45:37.243428726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9pfn6,Uid:a23a91c5-5432-4f52-bc66-3e79109dbd50,Namespace:kube-system,Attempt:0,} returns sandbox id \"db1836e48f4edeeb6bb40967b44d58b7702a5daf80e7e88d86f0ebe1cf0462c3\"" Jan 30 15:45:37.248057 containerd[1507]: time="2025-01-30T15:45:37.247845006Z" level=info msg="CreateContainer within sandbox \"db1836e48f4edeeb6bb40967b44d58b7702a5daf80e7e88d86f0ebe1cf0462c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:45:37.285943 containerd[1507]: time="2025-01-30T15:45:37.284884243Z" level=info msg="CreateContainer within sandbox \"db1836e48f4edeeb6bb40967b44d58b7702a5daf80e7e88d86f0ebe1cf0462c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f62cc11ab1dfddc083a937693de0ba1813bcc5aaf6914b8a33ff14494e4857fa\"" Jan 30 15:45:37.288686 containerd[1507]: time="2025-01-30T15:45:37.288618380Z" level=info msg="StartContainer for \"f62cc11ab1dfddc083a937693de0ba1813bcc5aaf6914b8a33ff14494e4857fa\"" Jan 30 15:45:37.328336 systemd[1]: Started cri-containerd-f62cc11ab1dfddc083a937693de0ba1813bcc5aaf6914b8a33ff14494e4857fa.scope - libcontainer container f62cc11ab1dfddc083a937693de0ba1813bcc5aaf6914b8a33ff14494e4857fa. 
Jan 30 15:45:37.378456 containerd[1507]: time="2025-01-30T15:45:37.378396942Z" level=info msg="StartContainer for \"f62cc11ab1dfddc083a937693de0ba1813bcc5aaf6914b8a33ff14494e4857fa\" returns successfully" Jan 30 15:45:37.528091 kubelet[2745]: I0130 15:45:37.527989 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9pfn6" podStartSLOduration=1.5279675959999999 podStartE2EDuration="1.527967596s" podCreationTimestamp="2025-01-30 15:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:45:37.515472878 +0000 UTC m=+6.284055946" watchObservedRunningTime="2025-01-30 15:45:37.527967596 +0000 UTC m=+6.296550656" Jan 30 15:45:37.948941 kubelet[2745]: E0130 15:45:37.948758 2745 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 15:45:37.948941 kubelet[2745]: E0130 15:45:37.948946 2745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-config-path podName:d4bb8523-69fb-4b61-84f1-745013762ac4 nodeName:}" failed. No retries permitted until 2025-01-30 15:45:38.448900465 +0000 UTC m=+7.217483519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-config-path") pod "cilium-ffm9c" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4") : failed to sync configmap cache: timed out waiting for the condition Jan 30 15:45:37.951327 kubelet[2745]: E0130 15:45:37.951153 2745 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 30 15:45:37.951327 kubelet[2745]: E0130 15:45:37.951276 2745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4bb8523-69fb-4b61-84f1-745013762ac4-clustermesh-secrets podName:d4bb8523-69fb-4b61-84f1-745013762ac4 nodeName:}" failed. No retries permitted until 2025-01-30 15:45:38.451251516 +0000 UTC m=+7.219834570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/d4bb8523-69fb-4b61-84f1-745013762ac4-clustermesh-secrets") pod "cilium-ffm9c" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4") : failed to sync secret cache: timed out waiting for the condition Jan 30 15:45:38.236204 containerd[1507]: time="2025-01-30T15:45:38.236040975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m82f7,Uid:4d1374c7-bfed-44ba-85b0-1668ae143351,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:38.281292 containerd[1507]: time="2025-01-30T15:45:38.280860434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:38.281292 containerd[1507]: time="2025-01-30T15:45:38.280981202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:38.281292 containerd[1507]: time="2025-01-30T15:45:38.281005290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:38.281292 containerd[1507]: time="2025-01-30T15:45:38.281182484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:38.321403 systemd[1]: Started cri-containerd-1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55.scope - libcontainer container 1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55. Jan 30 15:45:38.378159 containerd[1507]: time="2025-01-30T15:45:38.378086693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m82f7,Uid:4d1374c7-bfed-44ba-85b0-1668ae143351,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55\"" Jan 30 15:45:38.382118 containerd[1507]: time="2025-01-30T15:45:38.381783788Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 15:45:38.610194 containerd[1507]: time="2025-01-30T15:45:38.610033040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffm9c,Uid:d4bb8523-69fb-4b61-84f1-745013762ac4,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:38.638934 containerd[1507]: time="2025-01-30T15:45:38.638635139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:38.639850 containerd[1507]: time="2025-01-30T15:45:38.638827476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:38.639850 containerd[1507]: time="2025-01-30T15:45:38.639705858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:38.640281 containerd[1507]: time="2025-01-30T15:45:38.640189391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:38.667393 systemd[1]: Started cri-containerd-86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a.scope - libcontainer container 86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a. Jan 30 15:45:38.705664 containerd[1507]: time="2025-01-30T15:45:38.705314821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffm9c,Uid:d4bb8523-69fb-4b61-84f1-745013762ac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\"" Jan 30 15:45:39.339457 systemd[1]: Started sshd@10-10.243.85.194:22-103.187.146.254:34638.service - OpenSSH per-connection server daemon (103.187.146.254:34638). Jan 30 15:45:40.713550 sshd[3110]: Invalid user test from 103.187.146.254 port 34638 Jan 30 15:45:40.909570 sshd[3110]: Received disconnect from 103.187.146.254 port 34638:11: Bye Bye [preauth] Jan 30 15:45:40.909783 sshd[3110]: Disconnected from invalid user test 103.187.146.254 port 34638 [preauth] Jan 30 15:45:40.911557 systemd[1]: sshd@10-10.243.85.194:22-103.187.146.254:34638.service: Deactivated successfully. Jan 30 15:45:43.043261 systemd[1]: Started sshd@11-10.243.85.194:22-103.117.57.141:55948.service - OpenSSH per-connection server daemon (103.117.57.141:55948). Jan 30 15:45:43.795772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805550138.mount: Deactivated successfully. 
Jan 30 15:45:44.819143 containerd[1507]: time="2025-01-30T15:45:44.817931088Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:44.819143 containerd[1507]: time="2025-01-30T15:45:44.819061736Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 15:45:44.819805 containerd[1507]: time="2025-01-30T15:45:44.819697994Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:44.821609 containerd[1507]: time="2025-01-30T15:45:44.821562476Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.439735609s" Jan 30 15:45:44.821792 containerd[1507]: time="2025-01-30T15:45:44.821755841Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 15:45:44.825115 containerd[1507]: time="2025-01-30T15:45:44.823838592Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 15:45:44.826203 containerd[1507]: time="2025-01-30T15:45:44.826142597Z" level=info msg="CreateContainer within sandbox 
\"1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 15:45:44.854738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815503525.mount: Deactivated successfully. Jan 30 15:45:44.860036 containerd[1507]: time="2025-01-30T15:45:44.859310423Z" level=info msg="CreateContainer within sandbox \"1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\"" Jan 30 15:45:44.862241 containerd[1507]: time="2025-01-30T15:45:44.861370596Z" level=info msg="StartContainer for \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\"" Jan 30 15:45:44.910343 systemd[1]: Started cri-containerd-6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee.scope - libcontainer container 6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee. Jan 30 15:45:44.952668 containerd[1507]: time="2025-01-30T15:45:44.952482479Z" level=info msg="StartContainer for \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\" returns successfully" Jan 30 15:45:47.082473 sshd[3116]: Connection closed by 103.117.57.141 port 55948 [preauth] Jan 30 15:45:47.084711 systemd[1]: sshd@11-10.243.85.194:22-103.117.57.141:55948.service: Deactivated successfully. Jan 30 15:45:52.249993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191015609.mount: Deactivated successfully. 
Jan 30 15:45:55.781610 containerd[1507]: time="2025-01-30T15:45:55.781502679Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:55.782824 containerd[1507]: time="2025-01-30T15:45:55.782627242Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 15:45:55.785924 containerd[1507]: time="2025-01-30T15:45:55.785892061Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:55.787850 containerd[1507]: time="2025-01-30T15:45:55.787811497Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.962536897s" Jan 30 15:45:55.787926 containerd[1507]: time="2025-01-30T15:45:55.787854084Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 15:45:55.792679 containerd[1507]: time="2025-01-30T15:45:55.792075749Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:45:55.879510 containerd[1507]: time="2025-01-30T15:45:55.879462042Z" level=info msg="CreateContainer within sandbox 
\"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\"" Jan 30 15:45:55.881404 containerd[1507]: time="2025-01-30T15:45:55.880392895Z" level=info msg="StartContainer for \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\"" Jan 30 15:45:56.122354 systemd[1]: Started cri-containerd-8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616.scope - libcontainer container 8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616. Jan 30 15:45:56.169566 containerd[1507]: time="2025-01-30T15:45:56.169052566Z" level=info msg="StartContainer for \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\" returns successfully" Jan 30 15:45:56.184714 systemd[1]: cri-containerd-8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616.scope: Deactivated successfully. Jan 30 15:45:56.375705 containerd[1507]: time="2025-01-30T15:45:56.362419161Z" level=info msg="shim disconnected" id=8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616 namespace=k8s.io Jan 30 15:45:56.375705 containerd[1507]: time="2025-01-30T15:45:56.375288145Z" level=warning msg="cleaning up after shim disconnected" id=8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616 namespace=k8s.io Jan 30 15:45:56.375705 containerd[1507]: time="2025-01-30T15:45:56.375320834Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:56.678414 containerd[1507]: time="2025-01-30T15:45:56.678068810Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:45:56.697971 containerd[1507]: time="2025-01-30T15:45:56.697875244Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\"" Jan 30 15:45:56.699055 containerd[1507]: time="2025-01-30T15:45:56.698986922Z" level=info msg="StartContainer for \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\"" Jan 30 15:45:56.722610 kubelet[2745]: I0130 15:45:56.722530 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m82f7" podStartSLOduration=14.279647011 podStartE2EDuration="20.722500624s" podCreationTimestamp="2025-01-30 15:45:36 +0000 UTC" firstStartedPulling="2025-01-30 15:45:38.380170119 +0000 UTC m=+7.148753173" lastFinishedPulling="2025-01-30 15:45:44.823023726 +0000 UTC m=+13.591606786" observedRunningTime="2025-01-30 15:45:45.667329821 +0000 UTC m=+14.435912888" watchObservedRunningTime="2025-01-30 15:45:56.722500624 +0000 UTC m=+25.491083679" Jan 30 15:45:56.741332 systemd[1]: Started cri-containerd-0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e.scope - libcontainer container 0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e. Jan 30 15:45:56.782678 containerd[1507]: time="2025-01-30T15:45:56.781997251Z" level=info msg="StartContainer for \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\" returns successfully" Jan 30 15:45:56.799604 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:45:56.799996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:45:56.800940 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:45:56.808574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:45:56.808892 systemd[1]: cri-containerd-0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e.scope: Deactivated successfully. 
Jan 30 15:45:56.857457 containerd[1507]: time="2025-01-30T15:45:56.856965739Z" level=info msg="shim disconnected" id=0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e namespace=k8s.io Jan 30 15:45:56.857457 containerd[1507]: time="2025-01-30T15:45:56.857053410Z" level=warning msg="cleaning up after shim disconnected" id=0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e namespace=k8s.io Jan 30 15:45:56.857457 containerd[1507]: time="2025-01-30T15:45:56.857067054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:56.859112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:45:56.869899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616-rootfs.mount: Deactivated successfully. Jan 30 15:45:57.685964 containerd[1507]: time="2025-01-30T15:45:57.685707987Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:45:57.725083 containerd[1507]: time="2025-01-30T15:45:57.725023094Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\"" Jan 30 15:45:57.726480 containerd[1507]: time="2025-01-30T15:45:57.726444316Z" level=info msg="StartContainer for \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\"" Jan 30 15:45:57.777475 systemd[1]: Started cri-containerd-50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357.scope - libcontainer container 50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357. 
Jan 30 15:45:57.832249 containerd[1507]: time="2025-01-30T15:45:57.832070309Z" level=info msg="StartContainer for \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\" returns successfully" Jan 30 15:45:57.842324 systemd[1]: cri-containerd-50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357.scope: Deactivated successfully. Jan 30 15:45:57.869403 systemd[1]: run-containerd-runc-k8s.io-50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357-runc.imU4sc.mount: Deactivated successfully. Jan 30 15:45:57.869610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357-rootfs.mount: Deactivated successfully. Jan 30 15:45:57.873293 containerd[1507]: time="2025-01-30T15:45:57.873208486Z" level=info msg="shim disconnected" id=50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357 namespace=k8s.io Jan 30 15:45:57.873414 containerd[1507]: time="2025-01-30T15:45:57.873303174Z" level=warning msg="cleaning up after shim disconnected" id=50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357 namespace=k8s.io Jan 30 15:45:57.873414 containerd[1507]: time="2025-01-30T15:45:57.873320469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:58.692348 containerd[1507]: time="2025-01-30T15:45:58.692294303Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:45:58.718595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579614252.mount: Deactivated successfully. 
Jan 30 15:45:58.738947 containerd[1507]: time="2025-01-30T15:45:58.738850639Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\"" Jan 30 15:45:58.741200 containerd[1507]: time="2025-01-30T15:45:58.739972504Z" level=info msg="StartContainer for \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\"" Jan 30 15:45:58.787468 systemd[1]: Started cri-containerd-83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855.scope - libcontainer container 83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855. Jan 30 15:45:58.823148 systemd[1]: cri-containerd-83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855.scope: Deactivated successfully. Jan 30 15:45:58.828132 containerd[1507]: time="2025-01-30T15:45:58.825726945Z" level=info msg="StartContainer for \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\" returns successfully" Jan 30 15:45:58.859582 containerd[1507]: time="2025-01-30T15:45:58.859507258Z" level=info msg="shim disconnected" id=83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855 namespace=k8s.io Jan 30 15:45:58.860388 containerd[1507]: time="2025-01-30T15:45:58.859587183Z" level=warning msg="cleaning up after shim disconnected" id=83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855 namespace=k8s.io Jan 30 15:45:58.860388 containerd[1507]: time="2025-01-30T15:45:58.859603731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:58.870875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855-rootfs.mount: Deactivated successfully. 
Jan 30 15:45:59.694610 containerd[1507]: time="2025-01-30T15:45:59.694520614Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 15:45:59.719184 containerd[1507]: time="2025-01-30T15:45:59.719119431Z" level=info msg="CreateContainer within sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\"" Jan 30 15:45:59.720685 containerd[1507]: time="2025-01-30T15:45:59.720620861Z" level=info msg="StartContainer for \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\"" Jan 30 15:45:59.771370 systemd[1]: Started cri-containerd-42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa.scope - libcontainer container 42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa. Jan 30 15:45:59.815504 containerd[1507]: time="2025-01-30T15:45:59.815421776Z" level=info msg="StartContainer for \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\" returns successfully" Jan 30 15:45:59.989145 kubelet[2745]: I0130 15:45:59.988636 2745 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 15:46:00.077675 systemd[1]: Created slice kubepods-burstable-podaebd6f4a_000c_49db_aa20_3082138b1d7a.slice - libcontainer container kubepods-burstable-podaebd6f4a_000c_49db_aa20_3082138b1d7a.slice. Jan 30 15:46:00.094009 systemd[1]: Created slice kubepods-burstable-podbdbd91ff_111c_4f3f_bc0b_f8fdc5a756ec.slice - libcontainer container kubepods-burstable-podbdbd91ff_111c_4f3f_bc0b_f8fdc5a756ec.slice. 
Jan 30 15:46:00.235931 kubelet[2745]: I0130 15:46:00.235674 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xkx\" (UniqueName: \"kubernetes.io/projected/bdbd91ff-111c-4f3f-bc0b-f8fdc5a756ec-kube-api-access-42xkx\") pod \"coredns-668d6bf9bc-58r2q\" (UID: \"bdbd91ff-111c-4f3f-bc0b-f8fdc5a756ec\") " pod="kube-system/coredns-668d6bf9bc-58r2q"
Jan 30 15:46:00.235931 kubelet[2745]: I0130 15:46:00.235759 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdzl\" (UniqueName: \"kubernetes.io/projected/aebd6f4a-000c-49db-aa20-3082138b1d7a-kube-api-access-zgdzl\") pod \"coredns-668d6bf9bc-27tmt\" (UID: \"aebd6f4a-000c-49db-aa20-3082138b1d7a\") " pod="kube-system/coredns-668d6bf9bc-27tmt"
Jan 30 15:46:00.235931 kubelet[2745]: I0130 15:46:00.235802 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbd91ff-111c-4f3f-bc0b-f8fdc5a756ec-config-volume\") pod \"coredns-668d6bf9bc-58r2q\" (UID: \"bdbd91ff-111c-4f3f-bc0b-f8fdc5a756ec\") " pod="kube-system/coredns-668d6bf9bc-58r2q"
Jan 30 15:46:00.235931 kubelet[2745]: I0130 15:46:00.235862 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aebd6f4a-000c-49db-aa20-3082138b1d7a-config-volume\") pod \"coredns-668d6bf9bc-27tmt\" (UID: \"aebd6f4a-000c-49db-aa20-3082138b1d7a\") " pod="kube-system/coredns-668d6bf9bc-27tmt"
Jan 30 15:46:00.389168 containerd[1507]: time="2025-01-30T15:46:00.388375927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27tmt,Uid:aebd6f4a-000c-49db-aa20-3082138b1d7a,Namespace:kube-system,Attempt:0,}"
Jan 30 15:46:00.400561 containerd[1507]: time="2025-01-30T15:46:00.400525892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58r2q,Uid:bdbd91ff-111c-4f3f-bc0b-f8fdc5a756ec,Namespace:kube-system,Attempt:0,}"
Jan 30 15:46:02.394359 systemd-networkd[1425]: cilium_host: Link UP
Jan 30 15:46:02.395573 systemd-networkd[1425]: cilium_net: Link UP
Jan 30 15:46:02.396779 systemd-networkd[1425]: cilium_net: Gained carrier
Jan 30 15:46:02.397055 systemd-networkd[1425]: cilium_host: Gained carrier
Jan 30 15:46:02.437212 systemd-networkd[1425]: cilium_host: Gained IPv6LL
Jan 30 15:46:02.571245 systemd-networkd[1425]: cilium_vxlan: Link UP
Jan 30 15:46:02.571256 systemd-networkd[1425]: cilium_vxlan: Gained carrier
Jan 30 15:46:02.786259 systemd-networkd[1425]: cilium_net: Gained IPv6LL
Jan 30 15:46:03.141169 kernel: NET: Registered PF_ALG protocol family
Jan 30 15:46:04.051740 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL
Jan 30 15:46:04.177224 systemd-networkd[1425]: lxc_health: Link UP
Jan 30 15:46:04.187200 systemd-networkd[1425]: lxc_health: Gained carrier
Jan 30 15:46:04.553082 systemd-networkd[1425]: lxc2fcd750df684: Link UP
Jan 30 15:46:04.558842 systemd-networkd[1425]: lxc801f623b3c9a: Link UP
Jan 30 15:46:04.573719 kernel: eth0: renamed from tmp23df9
Jan 30 15:46:04.590141 kernel: eth0: renamed from tmpe6511
Jan 30 15:46:04.597709 systemd-networkd[1425]: lxc2fcd750df684: Gained carrier
Jan 30 15:46:04.603257 systemd-networkd[1425]: lxc801f623b3c9a: Gained carrier
Jan 30 15:46:04.720893 kubelet[2745]: I0130 15:46:04.720772 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ffm9c" podStartSLOduration=11.63649752 podStartE2EDuration="28.718404957s" podCreationTimestamp="2025-01-30 15:45:36 +0000 UTC" firstStartedPulling="2025-01-30 15:45:38.707754204 +0000 UTC m=+7.476337265" lastFinishedPulling="2025-01-30 15:45:55.789661641 +0000 UTC m=+24.558244702" observedRunningTime="2025-01-30 15:46:00.731928049 +0000 UTC m=+29.500511111" watchObservedRunningTime="2025-01-30 15:46:04.718404957 +0000 UTC m=+33.486988015"
Jan 30 15:46:05.907329 systemd-networkd[1425]: lxc2fcd750df684: Gained IPv6LL
Jan 30 15:46:05.970288 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Jan 30 15:46:06.035208 systemd-networkd[1425]: lxc801f623b3c9a: Gained IPv6LL
Jan 30 15:46:10.122660 containerd[1507]: time="2025-01-30T15:46:10.121810421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:46:10.122660 containerd[1507]: time="2025-01-30T15:46:10.122225406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:46:10.122660 containerd[1507]: time="2025-01-30T15:46:10.122261851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:10.125693 containerd[1507]: time="2025-01-30T15:46:10.123081602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:10.179964 containerd[1507]: time="2025-01-30T15:46:10.177031649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:46:10.179964 containerd[1507]: time="2025-01-30T15:46:10.177160713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:46:10.179964 containerd[1507]: time="2025-01-30T15:46:10.177185597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:10.179964 containerd[1507]: time="2025-01-30T15:46:10.177325840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:46:10.206138 systemd[1]: run-containerd-runc-k8s.io-e6511a5bf4a2409c9b27714b848a0916aad874e753023c6d6cefe49aacd85642-runc.S0Jwzw.mount: Deactivated successfully.
Jan 30 15:46:10.217336 systemd[1]: Started cri-containerd-e6511a5bf4a2409c9b27714b848a0916aad874e753023c6d6cefe49aacd85642.scope - libcontainer container e6511a5bf4a2409c9b27714b848a0916aad874e753023c6d6cefe49aacd85642.
Jan 30 15:46:10.261365 systemd[1]: Started cri-containerd-23df91f317ac668e5159c5717170c005894dba50b0dd59b6caa4f942758cf805.scope - libcontainer container 23df91f317ac668e5159c5717170c005894dba50b0dd59b6caa4f942758cf805.
Jan 30 15:46:10.356530 containerd[1507]: time="2025-01-30T15:46:10.356452399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58r2q,Uid:bdbd91ff-111c-4f3f-bc0b-f8fdc5a756ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6511a5bf4a2409c9b27714b848a0916aad874e753023c6d6cefe49aacd85642\""
Jan 30 15:46:10.363464 containerd[1507]: time="2025-01-30T15:46:10.363409015Z" level=info msg="CreateContainer within sandbox \"e6511a5bf4a2409c9b27714b848a0916aad874e753023c6d6cefe49aacd85642\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 15:46:10.398833 containerd[1507]: time="2025-01-30T15:46:10.398508801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27tmt,Uid:aebd6f4a-000c-49db-aa20-3082138b1d7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"23df91f317ac668e5159c5717170c005894dba50b0dd59b6caa4f942758cf805\""
Jan 30 15:46:10.400171 containerd[1507]: time="2025-01-30T15:46:10.399858422Z" level=info msg="CreateContainer within sandbox \"e6511a5bf4a2409c9b27714b848a0916aad874e753023c6d6cefe49aacd85642\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4385084889964befcd3860d05643aabca8ed630e7068c60ff8fbc4e8970b634b\""
Jan 30 15:46:10.403253 containerd[1507]: time="2025-01-30T15:46:10.402317099Z" level=info msg="StartContainer for \"4385084889964befcd3860d05643aabca8ed630e7068c60ff8fbc4e8970b634b\""
Jan 30 15:46:10.409552 containerd[1507]: time="2025-01-30T15:46:10.409307981Z" level=info msg="CreateContainer within sandbox \"23df91f317ac668e5159c5717170c005894dba50b0dd59b6caa4f942758cf805\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 15:46:10.429527 containerd[1507]: time="2025-01-30T15:46:10.429353696Z" level=info msg="CreateContainer within sandbox \"23df91f317ac668e5159c5717170c005894dba50b0dd59b6caa4f942758cf805\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb814b8575dc86e61225e50012259a2da67d445ea0cdd796d67b51a40e21bf44\""
Jan 30 15:46:10.431186 containerd[1507]: time="2025-01-30T15:46:10.430687574Z" level=info msg="StartContainer for \"eb814b8575dc86e61225e50012259a2da67d445ea0cdd796d67b51a40e21bf44\""
Jan 30 15:46:10.453362 systemd[1]: Started cri-containerd-4385084889964befcd3860d05643aabca8ed630e7068c60ff8fbc4e8970b634b.scope - libcontainer container 4385084889964befcd3860d05643aabca8ed630e7068c60ff8fbc4e8970b634b.
Jan 30 15:46:10.482338 systemd[1]: Started cri-containerd-eb814b8575dc86e61225e50012259a2da67d445ea0cdd796d67b51a40e21bf44.scope - libcontainer container eb814b8575dc86e61225e50012259a2da67d445ea0cdd796d67b51a40e21bf44.
Jan 30 15:46:10.514736 containerd[1507]: time="2025-01-30T15:46:10.514596822Z" level=info msg="StartContainer for \"4385084889964befcd3860d05643aabca8ed630e7068c60ff8fbc4e8970b634b\" returns successfully"
Jan 30 15:46:10.547989 containerd[1507]: time="2025-01-30T15:46:10.547788896Z" level=info msg="StartContainer for \"eb814b8575dc86e61225e50012259a2da67d445ea0cdd796d67b51a40e21bf44\" returns successfully"
Jan 30 15:46:10.758088 kubelet[2745]: I0130 15:46:10.755987 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-58r2q" podStartSLOduration=34.755947347 podStartE2EDuration="34.755947347s" podCreationTimestamp="2025-01-30 15:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:10.755523163 +0000 UTC m=+39.524106240" watchObservedRunningTime="2025-01-30 15:46:10.755947347 +0000 UTC m=+39.524530408"
Jan 30 15:46:10.803918 kubelet[2745]: I0130 15:46:10.803834 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-27tmt" podStartSLOduration=34.803813831 podStartE2EDuration="34.803813831s" podCreationTimestamp="2025-01-30 15:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:10.802677093 +0000 UTC m=+39.571260164" watchObservedRunningTime="2025-01-30 15:46:10.803813831 +0000 UTC m=+39.572396893"
Jan 30 15:46:11.140301 systemd[1]: run-containerd-runc-k8s.io-23df91f317ac668e5159c5717170c005894dba50b0dd59b6caa4f942758cf805-runc.NkzuVa.mount: Deactivated successfully.
Jan 30 15:46:49.081617 systemd[1]: Started sshd@12-10.243.85.194:22-139.178.89.65:47688.service - OpenSSH per-connection server daemon (139.178.89.65:47688).
Jan 30 15:46:50.019417 sshd[4147]: Accepted publickey for core from 139.178.89.65 port 47688 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:46:50.022490 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:46:50.033148 systemd-logind[1492]: New session 12 of user core.
Jan 30 15:46:50.045625 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 15:46:51.174889 sshd[4149]: Connection closed by 139.178.89.65 port 47688
Jan 30 15:46:51.175965 sshd-session[4147]: pam_unix(sshd:session): session closed for user core
Jan 30 15:46:51.182385 systemd[1]: sshd@12-10.243.85.194:22-139.178.89.65:47688.service: Deactivated successfully.
Jan 30 15:46:51.184788 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 15:46:51.186029 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
Jan 30 15:46:51.188275 systemd-logind[1492]: Removed session 12.
Jan 30 15:46:56.334520 systemd[1]: Started sshd@13-10.243.85.194:22-139.178.89.65:52482.service - OpenSSH per-connection server daemon (139.178.89.65:52482).
Jan 30 15:46:57.234187 sshd[4160]: Accepted publickey for core from 139.178.89.65 port 52482 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:46:57.236017 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:46:57.242906 systemd-logind[1492]: New session 13 of user core.
Jan 30 15:46:57.256355 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 15:46:57.958417 sshd[4162]: Connection closed by 139.178.89.65 port 52482
Jan 30 15:46:57.959678 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Jan 30 15:46:57.964219 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
Jan 30 15:46:57.964772 systemd[1]: sshd@13-10.243.85.194:22-139.178.89.65:52482.service: Deactivated successfully.
Jan 30 15:46:57.968238 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 15:46:57.971822 systemd-logind[1492]: Removed session 13.
Jan 30 15:47:00.485565 systemd[1]: Started sshd@14-10.243.85.194:22-217.65.82.98:55747.service - OpenSSH per-connection server daemon (217.65.82.98:55747).
Jan 30 15:47:01.280719 sshd[4174]: Invalid user user from 217.65.82.98 port 55747
Jan 30 15:47:01.424170 sshd[4174]: Received disconnect from 217.65.82.98 port 55747:11: Bye Bye [preauth]
Jan 30 15:47:01.424170 sshd[4174]: Disconnected from invalid user user 217.65.82.98 port 55747 [preauth]
Jan 30 15:47:01.426032 systemd[1]: sshd@14-10.243.85.194:22-217.65.82.98:55747.service: Deactivated successfully.
Jan 30 15:47:03.121510 systemd[1]: Started sshd@15-10.243.85.194:22-139.178.89.65:32980.service - OpenSSH per-connection server daemon (139.178.89.65:32980).
Jan 30 15:47:04.034870 sshd[4179]: Accepted publickey for core from 139.178.89.65 port 32980 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:04.037384 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:04.045284 systemd-logind[1492]: New session 14 of user core.
Jan 30 15:47:04.050309 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 15:47:04.788280 sshd[4181]: Connection closed by 139.178.89.65 port 32980
Jan 30 15:47:04.789672 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:04.796212 systemd[1]: sshd@15-10.243.85.194:22-139.178.89.65:32980.service: Deactivated successfully.
Jan 30 15:47:04.798735 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 15:47:04.799726 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
Jan 30 15:47:04.801223 systemd-logind[1492]: Removed session 14.
Jan 30 15:47:08.884508 systemd[1]: Started sshd@16-10.243.85.194:22-103.187.146.254:34438.service - OpenSSH per-connection server daemon (103.187.146.254:34438).
Jan 30 15:47:09.855878 sshd[4195]: Invalid user steam from 103.187.146.254 port 34438
Jan 30 15:47:09.947439 systemd[1]: Started sshd@17-10.243.85.194:22-139.178.89.65:32982.service - OpenSSH per-connection server daemon (139.178.89.65:32982).
Jan 30 15:47:10.341942 sshd[4195]: Received disconnect from 103.187.146.254 port 34438:11: Bye Bye [preauth]
Jan 30 15:47:10.341942 sshd[4195]: Disconnected from invalid user steam 103.187.146.254 port 34438 [preauth]
Jan 30 15:47:10.345237 systemd[1]: sshd@16-10.243.85.194:22-103.187.146.254:34438.service: Deactivated successfully.
Jan 30 15:47:10.848244 sshd[4198]: Accepted publickey for core from 139.178.89.65 port 32982 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:10.850408 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:10.859234 systemd-logind[1492]: New session 15 of user core.
Jan 30 15:47:10.864762 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 15:47:11.577413 sshd[4202]: Connection closed by 139.178.89.65 port 32982
Jan 30 15:47:11.580557 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:11.588839 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit.
Jan 30 15:47:11.589583 systemd[1]: sshd@17-10.243.85.194:22-139.178.89.65:32982.service: Deactivated successfully.
Jan 30 15:47:11.594489 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 15:47:11.596477 systemd-logind[1492]: Removed session 15.
Jan 30 15:47:11.737439 systemd[1]: Started sshd@18-10.243.85.194:22-139.178.89.65:60498.service - OpenSSH per-connection server daemon (139.178.89.65:60498).
Jan 30 15:47:12.660260 sshd[4214]: Accepted publickey for core from 139.178.89.65 port 60498 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:12.661819 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:12.669225 systemd-logind[1492]: New session 16 of user core.
Jan 30 15:47:12.675349 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 15:47:13.451312 sshd[4216]: Connection closed by 139.178.89.65 port 60498
Jan 30 15:47:13.452391 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:13.457497 systemd[1]: sshd@18-10.243.85.194:22-139.178.89.65:60498.service: Deactivated successfully.
Jan 30 15:47:13.460368 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 15:47:13.461333 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit.
Jan 30 15:47:13.463321 systemd-logind[1492]: Removed session 16.
Jan 30 15:47:13.610498 systemd[1]: Started sshd@19-10.243.85.194:22-139.178.89.65:60512.service - OpenSSH per-connection server daemon (139.178.89.65:60512).
Jan 30 15:47:14.512218 sshd[4225]: Accepted publickey for core from 139.178.89.65 port 60512 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:14.514504 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:14.521595 systemd-logind[1492]: New session 17 of user core.
Jan 30 15:47:14.537383 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 15:47:15.225772 sshd[4227]: Connection closed by 139.178.89.65 port 60512
Jan 30 15:47:15.227667 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:15.233065 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit.
Jan 30 15:47:15.233805 systemd[1]: sshd@19-10.243.85.194:22-139.178.89.65:60512.service: Deactivated successfully.
Jan 30 15:47:15.237583 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 15:47:15.239992 systemd-logind[1492]: Removed session 17.
Jan 30 15:47:20.388070 systemd[1]: Started sshd@20-10.243.85.194:22-139.178.89.65:60522.service - OpenSSH per-connection server daemon (139.178.89.65:60522).
Jan 30 15:47:21.306478 sshd[4238]: Accepted publickey for core from 139.178.89.65 port 60522 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:21.308488 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:21.315666 systemd-logind[1492]: New session 18 of user core.
Jan 30 15:47:21.323297 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 15:47:22.008825 sshd[4240]: Connection closed by 139.178.89.65 port 60522
Jan 30 15:47:22.009307 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:22.014682 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit.
Jan 30 15:47:22.015293 systemd[1]: sshd@20-10.243.85.194:22-139.178.89.65:60522.service: Deactivated successfully.
Jan 30 15:47:22.017971 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 15:47:22.019220 systemd-logind[1492]: Removed session 18.
Jan 30 15:47:27.166468 systemd[1]: Started sshd@21-10.243.85.194:22-139.178.89.65:58574.service - OpenSSH per-connection server daemon (139.178.89.65:58574).
Jan 30 15:47:28.072649 sshd[4251]: Accepted publickey for core from 139.178.89.65 port 58574 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:28.075721 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:28.084555 systemd-logind[1492]: New session 19 of user core.
Jan 30 15:47:28.089346 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 15:47:28.784158 sshd[4253]: Connection closed by 139.178.89.65 port 58574
Jan 30 15:47:28.785552 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:28.790517 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit.
Jan 30 15:47:28.791574 systemd[1]: sshd@21-10.243.85.194:22-139.178.89.65:58574.service: Deactivated successfully.
Jan 30 15:47:28.794864 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 15:47:28.796472 systemd-logind[1492]: Removed session 19.
Jan 30 15:47:28.942483 systemd[1]: Started sshd@22-10.243.85.194:22-139.178.89.65:58588.service - OpenSSH per-connection server daemon (139.178.89.65:58588).
Jan 30 15:47:29.826697 sshd[4264]: Accepted publickey for core from 139.178.89.65 port 58588 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:29.828590 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:29.835445 systemd-logind[1492]: New session 20 of user core.
Jan 30 15:47:29.841487 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 15:47:30.855774 sshd[4266]: Connection closed by 139.178.89.65 port 58588
Jan 30 15:47:30.857420 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:30.863186 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit.
Jan 30 15:47:30.864394 systemd[1]: sshd@22-10.243.85.194:22-139.178.89.65:58588.service: Deactivated successfully.
Jan 30 15:47:30.866886 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 15:47:30.868179 systemd-logind[1492]: Removed session 20.
Jan 30 15:47:31.016435 systemd[1]: Started sshd@23-10.243.85.194:22-139.178.89.65:58598.service - OpenSSH per-connection server daemon (139.178.89.65:58598).
Jan 30 15:47:31.925860 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 58598 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:31.927844 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:31.936462 systemd-logind[1492]: New session 21 of user core.
Jan 30 15:47:31.943318 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 15:47:33.686387 sshd[4279]: Connection closed by 139.178.89.65 port 58598
Jan 30 15:47:33.687773 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:33.700987 systemd[1]: sshd@23-10.243.85.194:22-139.178.89.65:58598.service: Deactivated successfully.
Jan 30 15:47:33.704050 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 15:47:33.705455 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit.
Jan 30 15:47:33.707275 systemd-logind[1492]: Removed session 21.
Jan 30 15:47:33.843923 systemd[1]: Started sshd@24-10.243.85.194:22-139.178.89.65:41750.service - OpenSSH per-connection server daemon (139.178.89.65:41750).
Jan 30 15:47:34.747071 sshd[4295]: Accepted publickey for core from 139.178.89.65 port 41750 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:34.749882 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:34.756832 systemd-logind[1492]: New session 22 of user core.
Jan 30 15:47:34.764480 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 15:47:35.639994 sshd[4297]: Connection closed by 139.178.89.65 port 41750
Jan 30 15:47:35.641223 sshd-session[4295]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:35.646599 systemd[1]: sshd@24-10.243.85.194:22-139.178.89.65:41750.service: Deactivated successfully.
Jan 30 15:47:35.649225 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 15:47:35.650915 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
Jan 30 15:47:35.652589 systemd-logind[1492]: Removed session 22.
Jan 30 15:47:35.796553 systemd[1]: Started sshd@25-10.243.85.194:22-139.178.89.65:41760.service - OpenSSH per-connection server daemon (139.178.89.65:41760).
Jan 30 15:47:36.684548 sshd[4305]: Accepted publickey for core from 139.178.89.65 port 41760 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:36.686620 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:36.693833 systemd-logind[1492]: New session 23 of user core.
Jan 30 15:47:36.701423 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 15:47:37.380563 sshd[4307]: Connection closed by 139.178.89.65 port 41760
Jan 30 15:47:37.381453 sshd-session[4305]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:37.386985 systemd[1]: sshd@25-10.243.85.194:22-139.178.89.65:41760.service: Deactivated successfully.
Jan 30 15:47:37.389215 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 15:47:37.390235 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
Jan 30 15:47:37.392016 systemd-logind[1492]: Removed session 23.
Jan 30 15:47:42.548763 systemd[1]: Started sshd@26-10.243.85.194:22-139.178.89.65:44162.service - OpenSSH per-connection server daemon (139.178.89.65:44162).
Jan 30 15:47:43.434741 sshd[4323]: Accepted publickey for core from 139.178.89.65 port 44162 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:43.437323 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:43.443881 systemd-logind[1492]: New session 24 of user core.
Jan 30 15:47:43.450305 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 15:47:44.136253 sshd[4325]: Connection closed by 139.178.89.65 port 44162
Jan 30 15:47:44.137295 sshd-session[4323]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:44.141357 systemd[1]: sshd@26-10.243.85.194:22-139.178.89.65:44162.service: Deactivated successfully.
Jan 30 15:47:44.144360 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 15:47:44.147666 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
Jan 30 15:47:44.149256 systemd-logind[1492]: Removed session 24.
Jan 30 15:47:48.600455 systemd[1]: Started sshd@27-10.243.85.194:22-103.117.57.141:59918.service - OpenSSH per-connection server daemon (103.117.57.141:59918).
Jan 30 15:47:49.298432 systemd[1]: Started sshd@28-10.243.85.194:22-139.178.89.65:44172.service - OpenSSH per-connection server daemon (139.178.89.65:44172).
Jan 30 15:47:50.052226 sshd[4335]: Invalid user steam from 103.117.57.141 port 59918
Jan 30 15:47:50.189219 sshd[4338]: Accepted publickey for core from 139.178.89.65 port 44172 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:50.191827 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:50.199355 systemd-logind[1492]: New session 25 of user core.
Jan 30 15:47:50.204334 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 15:47:50.255848 sshd[4335]: Received disconnect from 103.117.57.141 port 59918:11: Bye Bye [preauth]
Jan 30 15:47:50.255848 sshd[4335]: Disconnected from invalid user steam 103.117.57.141 port 59918 [preauth]
Jan 30 15:47:50.257542 systemd[1]: sshd@27-10.243.85.194:22-103.117.57.141:59918.service: Deactivated successfully.
Jan 30 15:47:50.893244 sshd[4340]: Connection closed by 139.178.89.65 port 44172
Jan 30 15:47:50.894639 sshd-session[4338]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:50.899536 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit.
Jan 30 15:47:50.901144 systemd[1]: sshd@28-10.243.85.194:22-139.178.89.65:44172.service: Deactivated successfully.
Jan 30 15:47:50.904251 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 15:47:50.906352 systemd-logind[1492]: Removed session 25.
Jan 30 15:47:56.053480 systemd[1]: Started sshd@29-10.243.85.194:22-139.178.89.65:56352.service - OpenSSH per-connection server daemon (139.178.89.65:56352).
Jan 30 15:47:56.954377 sshd[4352]: Accepted publickey for core from 139.178.89.65 port 56352 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:56.956336 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:56.962804 systemd-logind[1492]: New session 26 of user core.
Jan 30 15:47:56.973446 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 15:47:57.669629 sshd[4355]: Connection closed by 139.178.89.65 port 56352
Jan 30 15:47:57.670417 sshd-session[4352]: pam_unix(sshd:session): session closed for user core
Jan 30 15:47:57.675485 systemd[1]: sshd@29-10.243.85.194:22-139.178.89.65:56352.service: Deactivated successfully.
Jan 30 15:47:57.678323 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 15:47:57.679574 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit.
Jan 30 15:47:57.680908 systemd-logind[1492]: Removed session 26.
Jan 30 15:47:57.834513 systemd[1]: Started sshd@30-10.243.85.194:22-139.178.89.65:56366.service - OpenSSH per-connection server daemon (139.178.89.65:56366).
Jan 30 15:47:58.736345 sshd[4365]: Accepted publickey for core from 139.178.89.65 port 56366 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:47:58.738989 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:47:58.746134 systemd-logind[1492]: New session 27 of user core.
Jan 30 15:47:58.751296 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 15:48:00.941268 containerd[1507]: time="2025-01-30T15:48:00.939071356Z" level=info msg="StopContainer for \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\" with timeout 30 (s)"
Jan 30 15:48:00.943034 containerd[1507]: time="2025-01-30T15:48:00.942975003Z" level=info msg="Stop container \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\" with signal terminated"
Jan 30 15:48:00.946276 containerd[1507]: time="2025-01-30T15:48:00.946232343Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 15:48:00.961951 containerd[1507]: time="2025-01-30T15:48:00.961870939Z" level=info msg="StopContainer for \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\" with timeout 2 (s)"
Jan 30 15:48:00.962436 containerd[1507]: time="2025-01-30T15:48:00.962402557Z" level=info msg="Stop container \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\" with signal terminated"
Jan 30 15:48:00.969640 systemd[1]: cri-containerd-6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee.scope: Deactivated successfully.
Jan 30 15:48:00.986796 systemd-networkd[1425]: lxc_health: Link DOWN
Jan 30 15:48:00.986810 systemd-networkd[1425]: lxc_health: Lost carrier
Jan 30 15:48:01.024668 systemd[1]: cri-containerd-42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa.scope: Deactivated successfully.
Jan 30 15:48:01.025006 systemd[1]: cri-containerd-42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa.scope: Consumed 9.991s CPU time.
Jan 30 15:48:01.041270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee-rootfs.mount: Deactivated successfully.
Jan 30 15:48:01.054160 containerd[1507]: time="2025-01-30T15:48:01.053959416Z" level=info msg="shim disconnected" id=6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee namespace=k8s.io
Jan 30 15:48:01.054607 containerd[1507]: time="2025-01-30T15:48:01.054182053Z" level=warning msg="cleaning up after shim disconnected" id=6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee namespace=k8s.io
Jan 30 15:48:01.054607 containerd[1507]: time="2025-01-30T15:48:01.054241618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:01.067793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa-rootfs.mount: Deactivated successfully.
Jan 30 15:48:01.081494 containerd[1507]: time="2025-01-30T15:48:01.081287789Z" level=warning msg="cleanup warnings time=\"2025-01-30T15:48:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 15:48:01.089974 containerd[1507]: time="2025-01-30T15:48:01.087242323Z" level=info msg="shim disconnected" id=42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa namespace=k8s.io
Jan 30 15:48:01.089974 containerd[1507]: time="2025-01-30T15:48:01.087294604Z" level=warning msg="cleaning up after shim disconnected" id=42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa namespace=k8s.io
Jan 30 15:48:01.089974 containerd[1507]: time="2025-01-30T15:48:01.087308972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:01.090430 containerd[1507]: time="2025-01-30T15:48:01.087656780Z" level=info msg="StopContainer for \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\" returns successfully"
Jan 30 15:48:01.108188 containerd[1507]: time="2025-01-30T15:48:01.106878389Z" level=info msg="StopPodSandbox for \"1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55\""
Jan 30 15:48:01.116163 containerd[1507]: time="2025-01-30T15:48:01.110802385Z" level=info msg="Container to stop \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:48:01.121839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55-shm.mount: Deactivated successfully.
Jan 30 15:48:01.137533 containerd[1507]: time="2025-01-30T15:48:01.137483648Z" level=info msg="StopContainer for \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\" returns successfully"
Jan 30 15:48:01.138373 containerd[1507]: time="2025-01-30T15:48:01.138343925Z" level=info msg="StopPodSandbox for \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\""
Jan 30 15:48:01.138804 systemd[1]: cri-containerd-1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55.scope: Deactivated successfully.
Jan 30 15:48:01.140289 containerd[1507]: time="2025-01-30T15:48:01.138478799Z" level=info msg="Container to stop \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:48:01.140289 containerd[1507]: time="2025-01-30T15:48:01.139681251Z" level=info msg="Container to stop \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:48:01.140289 containerd[1507]: time="2025-01-30T15:48:01.139716177Z" level=info msg="Container to stop \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:48:01.140289 containerd[1507]: time="2025-01-30T15:48:01.139953939Z" level=info msg="Container to stop \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:48:01.140289 containerd[1507]: time="2025-01-30T15:48:01.139972369Z" level=info msg="Container to stop \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 15:48:01.153650 systemd[1]: cri-containerd-86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a.scope: Deactivated successfully.
Jan 30 15:48:01.200199 containerd[1507]: time="2025-01-30T15:48:01.199217316Z" level=info msg="shim disconnected" id=1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55 namespace=k8s.io
Jan 30 15:48:01.201397 containerd[1507]: time="2025-01-30T15:48:01.201356142Z" level=warning msg="cleaning up after shim disconnected" id=1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55 namespace=k8s.io
Jan 30 15:48:01.202231 containerd[1507]: time="2025-01-30T15:48:01.202201815Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:01.202976 containerd[1507]: time="2025-01-30T15:48:01.202839901Z" level=info msg="shim disconnected" id=86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a namespace=k8s.io
Jan 30 15:48:01.202976 containerd[1507]: time="2025-01-30T15:48:01.202882438Z" level=warning msg="cleaning up after shim disconnected" id=86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a namespace=k8s.io
Jan 30 15:48:01.202976 containerd[1507]: time="2025-01-30T15:48:01.202897501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:01.225514 containerd[1507]: time="2025-01-30T15:48:01.225445374Z" level=warning msg="cleanup warnings time=\"2025-01-30T15:48:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 15:48:01.227332 containerd[1507]: time="2025-01-30T15:48:01.227295546Z" level=info msg="TearDown network for sandbox \"1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55\" successfully"
Jan 30 15:48:01.227419 containerd[1507]: time="2025-01-30T15:48:01.227331695Z" level=info msg="StopPodSandbox for \"1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55\" returns successfully"
Jan 30 15:48:01.237625 containerd[1507]: time="2025-01-30T15:48:01.237548793Z" level=warning msg="cleanup warnings time=\"2025-01-30T15:48:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 15:48:01.238946 containerd[1507]: time="2025-01-30T15:48:01.238813549Z" level=info msg="TearDown network for sandbox \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" successfully"
Jan 30 15:48:01.238946 containerd[1507]: time="2025-01-30T15:48:01.238839616Z" level=info msg="StopPodSandbox for \"86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a\" returns successfully"
Jan 30 15:48:01.257015 kubelet[2745]: I0130 15:48:01.256964 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9r7v\" (UniqueName: \"kubernetes.io/projected/4d1374c7-bfed-44ba-85b0-1668ae143351-kube-api-access-l9r7v\") pod \"4d1374c7-bfed-44ba-85b0-1668ae143351\" (UID: \"4d1374c7-bfed-44ba-85b0-1668ae143351\") "
Jan 30 15:48:01.259627 kubelet[2745]: I0130 15:48:01.257036 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d1374c7-bfed-44ba-85b0-1668ae143351-cilium-config-path\") pod \"4d1374c7-bfed-44ba-85b0-1668ae143351\" (UID: \"4d1374c7-bfed-44ba-85b0-1668ae143351\") "
Jan 30 15:48:01.272186 kubelet[2745]: I0130 15:48:01.270579 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1374c7-bfed-44ba-85b0-1668ae143351-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d1374c7-bfed-44ba-85b0-1668ae143351" (UID: "4d1374c7-bfed-44ba-85b0-1668ae143351"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 15:48:01.272186 kubelet[2745]: I0130 15:48:01.270495 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1374c7-bfed-44ba-85b0-1668ae143351-kube-api-access-l9r7v" (OuterVolumeSpecName: "kube-api-access-l9r7v") pod "4d1374c7-bfed-44ba-85b0-1668ae143351" (UID: "4d1374c7-bfed-44ba-85b0-1668ae143351"). InnerVolumeSpecName "kube-api-access-l9r7v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 15:48:01.357417 kubelet[2745]: I0130 15:48:01.357352 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-hostproc\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.357803 kubelet[2745]: I0130 15:48:01.357487 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-hostproc" (OuterVolumeSpecName: "hostproc") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.358132 kubelet[2745]: I0130 15:48:01.357775 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-net\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358132 kubelet[2745]: I0130 15:48:01.357874 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cni-path\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358132 kubelet[2745]: I0130 15:48:01.357901 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.358132 kubelet[2745]: I0130 15:48:01.358014 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cni-path" (OuterVolumeSpecName: "cni-path") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.358132 kubelet[2745]: I0130 15:48:01.358029 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-hubble-tls\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358645 kubelet[2745]: I0130 15:48:01.358069 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-config-path\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358645 kubelet[2745]: I0130 15:48:01.358430 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-xtables-lock\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358645 kubelet[2745]: I0130 15:48:01.358486 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-bpf-maps\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358645 kubelet[2745]: I0130 15:48:01.358512 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-lib-modules\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358645 kubelet[2745]: I0130 15:48:01.358534 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-run\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.358645 kubelet[2745]: I0130 15:48:01.358592 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-cgroup\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.359423 kubelet[2745]: I0130 15:48:01.358741 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n87f9\" (UniqueName: \"kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-kube-api-access-n87f9\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.359423 kubelet[2745]: I0130 15:48:01.358777 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-kernel\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.359423 kubelet[2745]: I0130 15:48:01.358939 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4bb8523-69fb-4b61-84f1-745013762ac4-clustermesh-secrets\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.359423 kubelet[2745]: I0130 15:48:01.359199 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-etc-cni-netd\") pod \"d4bb8523-69fb-4b61-84f1-745013762ac4\" (UID: \"d4bb8523-69fb-4b61-84f1-745013762ac4\") "
Jan 30 15:48:01.360037 kubelet[2745]: I0130 15:48:01.359563 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9r7v\" (UniqueName: \"kubernetes.io/projected/4d1374c7-bfed-44ba-85b0-1668ae143351-kube-api-access-l9r7v\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.360037 kubelet[2745]: I0130 15:48:01.359587 2745 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-hostproc\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.360037 kubelet[2745]: I0130 15:48:01.359603 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d1374c7-bfed-44ba-85b0-1668ae143351-cilium-config-path\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.360037 kubelet[2745]: I0130 15:48:01.359618 2745 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cni-path\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.360037 kubelet[2745]: I0130 15:48:01.359650 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.362132 kubelet[2745]: I0130 15:48:01.361809 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 15:48:01.362132 kubelet[2745]: I0130 15:48:01.361862 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.362132 kubelet[2745]: I0130 15:48:01.361901 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.362132 kubelet[2745]: I0130 15:48:01.361930 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.362132 kubelet[2745]: I0130 15:48:01.361966 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.364288 kubelet[2745]: I0130 15:48:01.364259 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 15:48:01.364744 kubelet[2745]: I0130 15:48:01.364402 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.365049 kubelet[2745]: I0130 15:48:01.365003 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-kube-api-access-n87f9" (OuterVolumeSpecName: "kube-api-access-n87f9") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "kube-api-access-n87f9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 15:48:01.365204 kubelet[2745]: I0130 15:48:01.365060 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 15:48:01.367865 kubelet[2745]: I0130 15:48:01.367808 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bb8523-69fb-4b61-84f1-745013762ac4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d4bb8523-69fb-4b61-84f1-745013762ac4" (UID: "d4bb8523-69fb-4b61-84f1-745013762ac4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 15:48:01.444595 systemd[1]: Removed slice kubepods-besteffort-pod4d1374c7_bfed_44ba_85b0_1668ae143351.slice - libcontainer container kubepods-besteffort-pod4d1374c7_bfed_44ba_85b0_1668ae143351.slice.
Jan 30 15:48:01.449099 systemd[1]: Removed slice kubepods-burstable-podd4bb8523_69fb_4b61_84f1_745013762ac4.slice - libcontainer container kubepods-burstable-podd4bb8523_69fb_4b61_84f1_745013762ac4.slice.
Jan 30 15:48:01.449680 systemd[1]: kubepods-burstable-podd4bb8523_69fb_4b61_84f1_745013762ac4.slice: Consumed 10.107s CPU time.
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459815 2745 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-hubble-tls\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459869 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-config-path\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459886 2745 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-xtables-lock\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459907 2745 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-net\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459921 2745 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-bpf-maps\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459950 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-cgroup\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459964 2745 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-lib-modules\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460335 kubelet[2745]: I0130 15:48:01.459977 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-cilium-run\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460847 kubelet[2745]: I0130 15:48:01.459991 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n87f9\" (UniqueName: \"kubernetes.io/projected/d4bb8523-69fb-4b61-84f1-745013762ac4-kube-api-access-n87f9\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460847 kubelet[2745]: I0130 15:48:01.460006 2745 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-host-proc-sys-kernel\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.460847 kubelet[2745]: I0130 15:48:01.460021 2745 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4bb8523-69fb-4b61-84f1-745013762ac4-clustermesh-secrets\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.464267 kubelet[2745]: I0130 15:48:01.464228 2745 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4bb8523-69fb-4b61-84f1-745013762ac4-etc-cni-netd\") on node \"srv-eom3a.gb1.brightbox.com\" DevicePath \"\""
Jan 30 15:48:01.614713 kubelet[2745]: E0130 15:48:01.614627 2745 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 15:48:01.880243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a-rootfs.mount: Deactivated successfully.
Jan 30 15:48:01.880394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86f5ea8de930770ac39b9bd56bb5827358df0db77aeb88eaefe4023a24a6751a-shm.mount: Deactivated successfully.
Jan 30 15:48:01.880519 systemd[1]: var-lib-kubelet-pods-d4bb8523\x2d69fb\x2d4b61\x2d84f1\x2d745013762ac4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 15:48:01.880682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cfc299118daac3e51850efc1e952f8ed6730052b08a76e1ac3adb7e27038f55-rootfs.mount: Deactivated successfully.
Jan 30 15:48:01.880785 systemd[1]: var-lib-kubelet-pods-d4bb8523\x2d69fb\x2d4b61\x2d84f1\x2d745013762ac4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 15:48:01.880903 systemd[1]: var-lib-kubelet-pods-4d1374c7\x2dbfed\x2d44ba\x2d85b0\x2d1668ae143351-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9r7v.mount: Deactivated successfully.
Jan 30 15:48:01.881022 systemd[1]: var-lib-kubelet-pods-d4bb8523\x2d69fb\x2d4b61\x2d84f1\x2d745013762ac4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn87f9.mount: Deactivated successfully.
Jan 30 15:48:02.036829 kubelet[2745]: I0130 15:48:02.036707 2745 scope.go:117] "RemoveContainer" containerID="6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee"
Jan 30 15:48:02.059468 containerd[1507]: time="2025-01-30T15:48:02.058866091Z" level=info msg="RemoveContainer for \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\""
Jan 30 15:48:02.064908 containerd[1507]: time="2025-01-30T15:48:02.064865725Z" level=info msg="RemoveContainer for \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\" returns successfully"
Jan 30 15:48:02.069115 kubelet[2745]: I0130 15:48:02.067025 2745 scope.go:117] "RemoveContainer" containerID="6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee"
Jan 30 15:48:02.069255 containerd[1507]: time="2025-01-30T15:48:02.067543780Z" level=error msg="ContainerStatus for \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\": not found"
Jan 30 15:48:02.079037 kubelet[2745]: E0130 15:48:02.078931 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\": not found" containerID="6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee"
Jan 30 15:48:02.087170 kubelet[2745]: I0130 15:48:02.078985 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee"} err="failed to get container status \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"6221764567595b2d39567d6049c037016f83ecec17f859a5b22e0615dffd79ee\": not found"
Jan 30 15:48:02.087170 kubelet[2745]: I0130 15:48:02.086724 2745 scope.go:117] "RemoveContainer" containerID="42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa"
Jan 30 15:48:02.091175 containerd[1507]: time="2025-01-30T15:48:02.090592399Z" level=info msg="RemoveContainer for \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\""
Jan 30 15:48:02.096709 containerd[1507]: time="2025-01-30T15:48:02.096661491Z" level=info msg="RemoveContainer for \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\" returns successfully"
Jan 30 15:48:02.097853 kubelet[2745]: I0130 15:48:02.097327 2745 scope.go:117] "RemoveContainer" containerID="83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855"
Jan 30 15:48:02.107252 containerd[1507]: time="2025-01-30T15:48:02.107182888Z" level=info msg="RemoveContainer for \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\""
Jan 30 15:48:02.112068 containerd[1507]: time="2025-01-30T15:48:02.112016579Z" level=info msg="RemoveContainer for \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\" returns successfully"
Jan 30 15:48:02.112478 kubelet[2745]: I0130 15:48:02.112372 2745 scope.go:117] "RemoveContainer" containerID="50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357"
Jan 30 15:48:02.116447 containerd[1507]: time="2025-01-30T15:48:02.114936458Z" level=info msg="RemoveContainer for \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\""
Jan 30 15:48:02.119446 containerd[1507]: time="2025-01-30T15:48:02.119396915Z" level=info msg="RemoveContainer for \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\" returns successfully"
Jan 30 15:48:02.120030 kubelet[2745]: I0130 15:48:02.119909 2745 scope.go:117] "RemoveContainer" containerID="0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e"
Jan 30 15:48:02.121392 containerd[1507]: time="2025-01-30T15:48:02.121303282Z" level=info msg="RemoveContainer for \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\""
Jan 30 15:48:02.125668 containerd[1507]: time="2025-01-30T15:48:02.125628310Z" level=info msg="RemoveContainer for \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\" returns successfully"
Jan 30 15:48:02.125935 kubelet[2745]: I0130 15:48:02.125847 2745 scope.go:117] "RemoveContainer" containerID="8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616"
Jan 30 15:48:02.127021 containerd[1507]: time="2025-01-30T15:48:02.126993245Z" level=info msg="RemoveContainer for \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\""
Jan 30 15:48:02.130220 containerd[1507]: time="2025-01-30T15:48:02.130123612Z" level=info msg="RemoveContainer for \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\" returns successfully"
Jan 30 15:48:02.130810 kubelet[2745]: I0130 15:48:02.130306 2745 scope.go:117] "RemoveContainer" containerID="42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa"
Jan 30 15:48:02.130887 containerd[1507]: time="2025-01-30T15:48:02.130594087Z" level=error msg="ContainerStatus for \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\": not found"
Jan 30 15:48:02.130953 kubelet[2745]: E0130 15:48:02.130819 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\": not found" containerID="42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa"
Jan 30 15:48:02.130953 kubelet[2745]: I0130 15:48:02.130851 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa"} err="failed to get container status \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\": rpc error: code = NotFound desc = an error occurred when try to find container \"42a7cc56805e011fb85505f81f315db9308eeca3f9185a53e4d1b2b880d0defa\": not found"
Jan 30 15:48:02.130953 kubelet[2745]: I0130 15:48:02.130876 2745 scope.go:117] "RemoveContainer" containerID="83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855"
Jan 30 15:48:02.132570 containerd[1507]: time="2025-01-30T15:48:02.132241945Z" level=error msg="ContainerStatus for \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\": not found"
Jan 30 15:48:02.132643 kubelet[2745]: E0130 15:48:02.132429 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\": not found" containerID="83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855"
Jan 30 15:48:02.132643 kubelet[2745]: I0130 15:48:02.132470 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855"} err="failed to get container status \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\": rpc error: code = NotFound desc = an error occurred when try to find container \"83428fd4433541ee078b96aaeffff20817da475a61b80e33f7365a709bf70855\": not found"
Jan 30 15:48:02.132643 kubelet[2745]: I0130 15:48:02.132493 2745 scope.go:117] "RemoveContainer" containerID="50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357"
Jan 30 15:48:02.132998 containerd[1507]: time="2025-01-30T15:48:02.132783914Z" level=error msg="ContainerStatus for \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\": not found"
Jan 30 15:48:02.133177 kubelet[2745]: E0130 15:48:02.132973 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\": not found" containerID="50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357"
Jan 30 15:48:02.138815 kubelet[2745]: I0130 15:48:02.133150 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357"} err="failed to get container status \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\": rpc error: code = NotFound desc = an error occurred when try to find container \"50a146346dfad438f5ead193a20b88ae1f228939c9dfffbb8f5507ad5a1dd357\": not found"
Jan 30 15:48:02.138815 kubelet[2745]: I0130 15:48:02.138811 2745 scope.go:117] "RemoveContainer" containerID="0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e"
Jan 30 15:48:02.139237 containerd[1507]: time="2025-01-30T15:48:02.139049779Z" level=error msg="ContainerStatus for \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\": not found"
Jan 30 15:48:02.139317 kubelet[2745]: E0130 15:48:02.139238 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\": not found"
containerID="0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e" Jan 30 15:48:02.139317 kubelet[2745]: I0130 15:48:02.139268 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e"} err="failed to get container status \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b6e9728da4b4b82fa9f424996adcadea9895f28f538902825c096bd1f629b4e\": not found" Jan 30 15:48:02.139317 kubelet[2745]: I0130 15:48:02.139291 2745 scope.go:117] "RemoveContainer" containerID="8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616" Jan 30 15:48:02.139762 containerd[1507]: time="2025-01-30T15:48:02.139727020Z" level=error msg="ContainerStatus for \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\": not found" Jan 30 15:48:02.140056 kubelet[2745]: E0130 15:48:02.140002 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\": not found" containerID="8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616" Jan 30 15:48:02.140153 kubelet[2745]: I0130 15:48:02.140122 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616"} err="failed to get container status \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\": rpc error: code = NotFound desc = an error occurred when try to find container \"8da38d40aff35b4795aed99f335e1d55c4153deb6dcb1d9270fd832805f7a616\": not found" Jan 30 
15:48:02.934181 sshd[4367]: Connection closed by 139.178.89.65 port 56366 Jan 30 15:48:02.935072 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Jan 30 15:48:02.940035 systemd[1]: sshd@30-10.243.85.194:22-139.178.89.65:56366.service: Deactivated successfully. Jan 30 15:48:02.942562 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 15:48:02.944001 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit. Jan 30 15:48:02.945575 systemd-logind[1492]: Removed session 27. Jan 30 15:48:03.094500 systemd[1]: Started sshd@31-10.243.85.194:22-139.178.89.65:39730.service - OpenSSH per-connection server daemon (139.178.89.65:39730). Jan 30 15:48:03.430978 kubelet[2745]: I0130 15:48:03.430872 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1374c7-bfed-44ba-85b0-1668ae143351" path="/var/lib/kubelet/pods/4d1374c7-bfed-44ba-85b0-1668ae143351/volumes" Jan 30 15:48:03.431925 kubelet[2745]: I0130 15:48:03.431886 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4bb8523-69fb-4b61-84f1-745013762ac4" path="/var/lib/kubelet/pods/d4bb8523-69fb-4b61-84f1-745013762ac4/volumes" Jan 30 15:48:03.985779 sshd[4526]: Accepted publickey for core from 139.178.89.65 port 39730 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ Jan 30 15:48:03.988004 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:48:03.996974 systemd-logind[1492]: New session 28 of user core. Jan 30 15:48:04.005391 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 30 15:48:05.037909 kubelet[2745]: I0130 15:48:05.037798 2745 setters.go:602] "Node became not ready" node="srv-eom3a.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T15:48:05Z","lastTransitionTime":"2025-01-30T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 15:48:05.359329 kubelet[2745]: I0130 15:48:05.358602 2745 memory_manager.go:355] "RemoveStaleState removing state" podUID="4d1374c7-bfed-44ba-85b0-1668ae143351" containerName="cilium-operator"
Jan 30 15:48:05.359329 kubelet[2745]: I0130 15:48:05.358648 2745 memory_manager.go:355] "RemoveStaleState removing state" podUID="d4bb8523-69fb-4b61-84f1-745013762ac4" containerName="cilium-agent"
Jan 30 15:48:05.390731 kubelet[2745]: I0130 15:48:05.389723 2745 status_manager.go:890] "Failed to get status for pod" podUID="c8a0c00d-e9d9-43db-8935-b9af22b58817" pod="kube-system/cilium-4h5tl" err="pods \"cilium-4h5tl\" is forbidden: User \"system:node:srv-eom3a.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-eom3a.gb1.brightbox.com' and this object"
Jan 30 15:48:05.401244 systemd[1]: Created slice kubepods-burstable-podc8a0c00d_e9d9_43db_8935_b9af22b58817.slice - libcontainer container kubepods-burstable-podc8a0c00d_e9d9_43db_8935_b9af22b58817.slice.
Jan 30 15:48:05.451338 sshd[4529]: Connection closed by 139.178.89.65 port 39730
Jan 30 15:48:05.452772 sshd-session[4526]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:05.459145 systemd[1]: sshd@31-10.243.85.194:22-139.178.89.65:39730.service: Deactivated successfully.
Jan 30 15:48:05.462371 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 15:48:05.465246 systemd-logind[1492]: Session 28 logged out. Waiting for processes to exit.
Jan 30 15:48:05.467454 systemd-logind[1492]: Removed session 28.
Jan 30 15:48:05.500267 kubelet[2745]: I0130 15:48:05.500129 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8a0c00d-e9d9-43db-8935-b9af22b58817-clustermesh-secrets\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500267 kubelet[2745]: I0130 15:48:05.500203 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-host-proc-sys-kernel\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500267 kubelet[2745]: I0130 15:48:05.500244 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgjm\" (UniqueName: \"kubernetes.io/projected/c8a0c00d-e9d9-43db-8935-b9af22b58817-kube-api-access-5lgjm\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500267 kubelet[2745]: I0130 15:48:05.500276 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8a0c00d-e9d9-43db-8935-b9af22b58817-cilium-config-path\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500662 kubelet[2745]: I0130 15:48:05.500309 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-cilium-cgroup\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500662 kubelet[2745]: I0130 15:48:05.500341 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c8a0c00d-e9d9-43db-8935-b9af22b58817-cilium-ipsec-secrets\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500662 kubelet[2745]: I0130 15:48:05.500369 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-cilium-run\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500662 kubelet[2745]: I0130 15:48:05.500394 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-bpf-maps\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500662 kubelet[2745]: I0130 15:48:05.500419 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-cni-path\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500662 kubelet[2745]: I0130 15:48:05.500446 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-lib-modules\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500931 kubelet[2745]: I0130 15:48:05.500474 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-hostproc\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500931 kubelet[2745]: I0130 15:48:05.500498 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-xtables-lock\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500931 kubelet[2745]: I0130 15:48:05.500522 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-host-proc-sys-net\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500931 kubelet[2745]: I0130 15:48:05.500548 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8a0c00d-e9d9-43db-8935-b9af22b58817-hubble-tls\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.500931 kubelet[2745]: I0130 15:48:05.500573 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8a0c00d-e9d9-43db-8935-b9af22b58817-etc-cni-netd\") pod \"cilium-4h5tl\" (UID: \"c8a0c00d-e9d9-43db-8935-b9af22b58817\") " pod="kube-system/cilium-4h5tl"
Jan 30 15:48:05.614284 systemd[1]: Started sshd@32-10.243.85.194:22-139.178.89.65:39732.service - OpenSSH per-connection server daemon (139.178.89.65:39732).
Jan 30 15:48:05.730902 containerd[1507]: time="2025-01-30T15:48:05.730818633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4h5tl,Uid:c8a0c00d-e9d9-43db-8935-b9af22b58817,Namespace:kube-system,Attempt:0,}"
Jan 30 15:48:05.763533 containerd[1507]: time="2025-01-30T15:48:05.763345556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:48:05.763533 containerd[1507]: time="2025-01-30T15:48:05.763446996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:48:05.763533 containerd[1507]: time="2025-01-30T15:48:05.763467409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:48:05.763909 containerd[1507]: time="2025-01-30T15:48:05.763622029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:48:05.790345 systemd[1]: Started cri-containerd-c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d.scope - libcontainer container c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d.
Jan 30 15:48:05.831848 containerd[1507]: time="2025-01-30T15:48:05.831775177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4h5tl,Uid:c8a0c00d-e9d9-43db-8935-b9af22b58817,Namespace:kube-system,Attempt:0,} returns sandbox id \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\""
Jan 30 15:48:05.839396 containerd[1507]: time="2025-01-30T15:48:05.838950112Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 15:48:05.853273 containerd[1507]: time="2025-01-30T15:48:05.853204292Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2\""
Jan 30 15:48:05.854488 containerd[1507]: time="2025-01-30T15:48:05.854419940Z" level=info msg="StartContainer for \"b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2\""
Jan 30 15:48:05.892289 systemd[1]: Started cri-containerd-b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2.scope - libcontainer container b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2.
Jan 30 15:48:05.936114 containerd[1507]: time="2025-01-30T15:48:05.935832068Z" level=info msg="StartContainer for \"b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2\" returns successfully"
Jan 30 15:48:05.953006 systemd[1]: cri-containerd-b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2.scope: Deactivated successfully.
Jan 30 15:48:06.010732 containerd[1507]: time="2025-01-30T15:48:06.010571331Z" level=info msg="shim disconnected" id=b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2 namespace=k8s.io
Jan 30 15:48:06.010732 containerd[1507]: time="2025-01-30T15:48:06.010725858Z" level=warning msg="cleaning up after shim disconnected" id=b69824f9f05d65c4ef1414afdccbe05dcab50288df667235b1766462d45c58e2 namespace=k8s.io
Jan 30 15:48:06.011410 containerd[1507]: time="2025-01-30T15:48:06.010766681Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:06.062029 containerd[1507]: time="2025-01-30T15:48:06.061945226Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 15:48:06.084957 containerd[1507]: time="2025-01-30T15:48:06.084884980Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b\""
Jan 30 15:48:06.085921 containerd[1507]: time="2025-01-30T15:48:06.085882931Z" level=info msg="StartContainer for \"1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b\""
Jan 30 15:48:06.130370 systemd[1]: Started cri-containerd-1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b.scope - libcontainer container 1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b.
Jan 30 15:48:06.171088 containerd[1507]: time="2025-01-30T15:48:06.170385282Z" level=info msg="StartContainer for \"1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b\" returns successfully"
Jan 30 15:48:06.184685 systemd[1]: cri-containerd-1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b.scope: Deactivated successfully.
Jan 30 15:48:06.220127 containerd[1507]: time="2025-01-30T15:48:06.219675491Z" level=info msg="shim disconnected" id=1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b namespace=k8s.io
Jan 30 15:48:06.220127 containerd[1507]: time="2025-01-30T15:48:06.219786902Z" level=warning msg="cleaning up after shim disconnected" id=1e30ef1157297d370fdbe4f8c378335b95e16981505cbf2ad6bd80a5a6b6890b namespace=k8s.io
Jan 30 15:48:06.220127 containerd[1507]: time="2025-01-30T15:48:06.219802156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:06.544087 sshd[4538]: Accepted publickey for core from 139.178.89.65 port 39732 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:48:06.547121 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:06.554520 systemd-logind[1492]: New session 29 of user core.
Jan 30 15:48:06.567320 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 15:48:06.616137 kubelet[2745]: E0130 15:48:06.616033 2745 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 15:48:07.082690 containerd[1507]: time="2025-01-30T15:48:07.082194632Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 15:48:07.122695 containerd[1507]: time="2025-01-30T15:48:07.121507307Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9\""
Jan 30 15:48:07.128847 containerd[1507]: time="2025-01-30T15:48:07.128785150Z" level=info msg="StartContainer for \"32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9\""
Jan 30 15:48:07.160859 sshd[4708]: Connection closed by 139.178.89.65 port 39732
Jan 30 15:48:07.161574 sshd-session[4538]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:07.169434 systemd[1]: sshd@32-10.243.85.194:22-139.178.89.65:39732.service: Deactivated successfully.
Jan 30 15:48:07.172747 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 15:48:07.175672 systemd-logind[1492]: Session 29 logged out. Waiting for processes to exit.
Jan 30 15:48:07.178021 systemd-logind[1492]: Removed session 29.
Jan 30 15:48:07.193299 systemd[1]: Started cri-containerd-32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9.scope - libcontainer container 32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9.
Jan 30 15:48:07.250983 containerd[1507]: time="2025-01-30T15:48:07.250685576Z" level=info msg="StartContainer for \"32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9\" returns successfully"
Jan 30 15:48:07.260436 systemd[1]: cri-containerd-32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9.scope: Deactivated successfully.
Jan 30 15:48:07.293915 containerd[1507]: time="2025-01-30T15:48:07.293791144Z" level=info msg="shim disconnected" id=32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9 namespace=k8s.io
Jan 30 15:48:07.293915 containerd[1507]: time="2025-01-30T15:48:07.293908064Z" level=warning msg="cleaning up after shim disconnected" id=32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9 namespace=k8s.io
Jan 30 15:48:07.294373 containerd[1507]: time="2025-01-30T15:48:07.293923493Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:07.319794 systemd[1]: Started sshd@33-10.243.85.194:22-139.178.89.65:39740.service - OpenSSH per-connection server daemon (139.178.89.65:39740).
Jan 30 15:48:07.629660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32a3901d9ab7450f029926aae173c3da154487f30e90b61114a743a556d533e9-rootfs.mount: Deactivated successfully.
Jan 30 15:48:08.088576 containerd[1507]: time="2025-01-30T15:48:08.088374373Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 15:48:08.118048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822745534.mount: Deactivated successfully.
Jan 30 15:48:08.135511 containerd[1507]: time="2025-01-30T15:48:08.135441287Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d\""
Jan 30 15:48:08.136265 containerd[1507]: time="2025-01-30T15:48:08.136228119Z" level=info msg="StartContainer for \"89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d\""
Jan 30 15:48:08.185386 systemd[1]: Started cri-containerd-89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d.scope - libcontainer container 89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d.
Jan 30 15:48:08.218382 sshd[4769]: Accepted publickey for core from 139.178.89.65 port 39740 ssh2: RSA SHA256:Dn4MxDy04q+t+ei5/s5fxQGaUh9w4dS1NM4x1ar4TVQ
Jan 30 15:48:08.222688 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:48:08.232823 systemd-logind[1492]: New session 30 of user core.
Jan 30 15:48:08.235748 containerd[1507]: time="2025-01-30T15:48:08.235693670Z" level=info msg="StartContainer for \"89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d\" returns successfully"
Jan 30 15:48:08.238583 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 30 15:48:08.239736 systemd[1]: cri-containerd-89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d.scope: Deactivated successfully.
Jan 30 15:48:08.271986 containerd[1507]: time="2025-01-30T15:48:08.271839711Z" level=info msg="shim disconnected" id=89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d namespace=k8s.io
Jan 30 15:48:08.272417 containerd[1507]: time="2025-01-30T15:48:08.272257669Z" level=warning msg="cleaning up after shim disconnected" id=89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d namespace=k8s.io
Jan 30 15:48:08.272417 containerd[1507]: time="2025-01-30T15:48:08.272283592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:48:08.628365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89bfec2c7d45457a400d0e5e289461a84f93bbb61020d1e166573b1c205c159d-rootfs.mount: Deactivated successfully.
Jan 30 15:48:09.094170 containerd[1507]: time="2025-01-30T15:48:09.094083805Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 15:48:09.125690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685605038.mount: Deactivated successfully.
Jan 30 15:48:09.128234 containerd[1507]: time="2025-01-30T15:48:09.127996024Z" level=info msg="CreateContainer within sandbox \"c03dac96e33957287b895285d54799c0b424be7290f5d4b45a75241a38a8e37d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eb9649b1827b4865392d56550325a0f4d7cba88b58f58fbcc874b3c40dcb2a7d\""
Jan 30 15:48:09.130032 containerd[1507]: time="2025-01-30T15:48:09.129925352Z" level=info msg="StartContainer for \"eb9649b1827b4865392d56550325a0f4d7cba88b58f58fbcc874b3c40dcb2a7d\""
Jan 30 15:48:09.169330 systemd[1]: Started cri-containerd-eb9649b1827b4865392d56550325a0f4d7cba88b58f58fbcc874b3c40dcb2a7d.scope - libcontainer container eb9649b1827b4865392d56550325a0f4d7cba88b58f58fbcc874b3c40dcb2a7d.
Jan 30 15:48:09.211524 containerd[1507]: time="2025-01-30T15:48:09.211462511Z" level=info msg="StartContainer for \"eb9649b1827b4865392d56550325a0f4d7cba88b58f58fbcc874b3c40dcb2a7d\" returns successfully"
Jan 30 15:48:09.876712 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 15:48:13.567753 systemd-networkd[1425]: lxc_health: Link UP
Jan 30 15:48:13.577663 systemd-networkd[1425]: lxc_health: Gained carrier
Jan 30 15:48:13.763008 kubelet[2745]: I0130 15:48:13.762936 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4h5tl" podStartSLOduration=8.762903879 podStartE2EDuration="8.762903879s" podCreationTimestamp="2025-01-30 15:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:48:10.138176067 +0000 UTC m=+158.906759136" watchObservedRunningTime="2025-01-30 15:48:13.762903879 +0000 UTC m=+162.531486945"
Jan 30 15:48:14.674430 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Jan 30 15:48:17.992776 systemd[1]: run-containerd-runc-k8s.io-eb9649b1827b4865392d56550325a0f4d7cba88b58f58fbcc874b3c40dcb2a7d-runc.xQVkV3.mount: Deactivated successfully.
Jan 30 15:48:20.158695 systemd[1]: run-containerd-runc-k8s.io-eb9649b1827b4865392d56550325a0f4d7cba88b58f58fbcc874b3c40dcb2a7d-runc.SLVBbr.mount: Deactivated successfully.
Jan 30 15:48:20.384131 sshd[4810]: Connection closed by 139.178.89.65 port 39740
Jan 30 15:48:20.385162 sshd-session[4769]: pam_unix(sshd:session): session closed for user core
Jan 30 15:48:20.390838 systemd[1]: sshd@33-10.243.85.194:22-139.178.89.65:39740.service: Deactivated successfully.
Jan 30 15:48:20.394841 systemd[1]: session-30.scope: Deactivated successfully.
Jan 30 15:48:20.397162 systemd-logind[1492]: Session 30 logged out. Waiting for processes to exit.
Jan 30 15:48:20.399120 systemd-logind[1492]: Removed session 30.