Jan 29 11:54:54.039021 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 29 11:54:54.039072 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:54:54.039087 kernel: BIOS-provided physical RAM map:
Jan 29 11:54:54.039103 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:54:54.039114 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:54:54.039126 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:54:54.039138 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 29 11:54:54.039149 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 29 11:54:54.039161 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:54:54.039172 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 11:54:54.039183 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:54:54.039194 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:54:54.039210 kernel: NX (Execute Disable) protection: active
Jan 29 11:54:54.039222 kernel: APIC: Static calls initialized
Jan 29 11:54:54.039235 kernel: SMBIOS 2.8 present.
Jan 29 11:54:54.039268 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 29 11:54:54.039282 kernel: Hypervisor detected: KVM
Jan 29 11:54:54.039300 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:54:54.039312 kernel: kvm-clock: using sched offset of 4451058979 cycles
Jan 29 11:54:54.039326 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:54:54.039338 kernel: tsc: Detected 2499.998 MHz processor
Jan 29 11:54:54.039350 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:54:54.039363 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:54:54.039375 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 29 11:54:54.039388 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:54:54.039400 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:54:54.039416 kernel: Using GB pages for direct mapping
Jan 29 11:54:54.039429 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:54:54.039441 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 11:54:54.039453 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:54.039466 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:54.039478 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:54.039490 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 29 11:54:54.039502 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:54.039515 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:54.039531 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:54.039544 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:54.039556 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 29 11:54:54.039568 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 29 11:54:54.039581 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 29 11:54:54.039599 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 29 11:54:54.039612 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 29 11:54:54.039629 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 29 11:54:54.039642 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 29 11:54:54.039654 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 11:54:54.039667 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 11:54:54.039680 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 11:54:54.039692 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 29 11:54:54.039705 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 11:54:54.039717 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 29 11:54:54.039735 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 11:54:54.039747 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 29 11:54:54.039764 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 11:54:54.039776 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 29 11:54:54.039789 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 11:54:54.039801 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 29 11:54:54.039814 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 11:54:54.039826 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 29 11:54:54.039839 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 11:54:54.039851 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 29 11:54:54.039868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 11:54:54.039881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 11:54:54.039894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 29 11:54:54.039907 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 29 11:54:54.039919 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 29 11:54:54.039932 kernel: Zone ranges:
Jan 29 11:54:54.039945 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:54:54.039969 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 29 11:54:54.039982 kernel: Normal empty
Jan 29 11:54:54.040000 kernel: Movable zone start for each node
Jan 29 11:54:54.040013 kernel: Early memory node ranges
Jan 29 11:54:54.040025 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:54:54.040038 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 29 11:54:54.040050 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 29 11:54:54.040063 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:54:54.040075 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:54:54.040088 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 29 11:54:54.040100 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:54:54.040118 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:54:54.040131 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:54:54.040143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:54:54.040155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:54:54.040168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:54:54.040181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:54:54.040193 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:54:54.040206 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:54:54.040218 kernel: TSC deadline timer available
Jan 29 11:54:54.040235 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 29 11:54:54.040330 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:54:54.040346 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 11:54:54.040358 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:54:54.040371 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:54:54.040384 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 11:54:54.040397 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 11:54:54.040410 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 11:54:54.040422 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 11:54:54.040442 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:54:54.040455 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:54:54.040469 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:54:54.040483 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:54:54.040495 kernel: random: crng init done
Jan 29 11:54:54.040508 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:54:54.040520 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 11:54:54.040533 kernel: Fallback order for Node 0: 0
Jan 29 11:54:54.040550 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 29 11:54:54.040563 kernel: Policy zone: DMA32
Jan 29 11:54:54.040576 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:54:54.040588 kernel: software IO TLB: area num 16.
Jan 29 11:54:54.040601 kernel: Memory: 1899476K/2096616K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 196880K reserved, 0K cma-reserved)
Jan 29 11:54:54.040614 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 11:54:54.040627 kernel: Kernel/User page tables isolation: enabled
Jan 29 11:54:54.040640 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 11:54:54.040652 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:54:54.040669 kernel: Dynamic Preempt: voluntary
Jan 29 11:54:54.040682 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:54:54.040696 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:54:54.040709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 11:54:54.040722 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:54:54.040746 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:54:54.040764 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:54:54.040778 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:54:54.040791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 11:54:54.040804 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 29 11:54:54.040817 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:54:54.040830 kernel: Console: colour VGA+ 80x25
Jan 29 11:54:54.040848 kernel: printk: console [tty0] enabled
Jan 29 11:54:54.040862 kernel: printk: console [ttyS0] enabled
Jan 29 11:54:54.040875 kernel: ACPI: Core revision 20230628
Jan 29 11:54:54.040888 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:54:54.040901 kernel: x2apic enabled
Jan 29 11:54:54.040919 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:54:54.040933 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 29 11:54:54.040957 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 29 11:54:54.040973 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:54:54.040987 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 11:54:54.041000 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 11:54:54.041018 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:54:54.041030 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:54:54.041043 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:54:54.041056 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:54:54.041075 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 11:54:54.041089 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:54:54.041102 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:54:54.041115 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 11:54:54.041128 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 29 11:54:54.041140 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 29 11:54:54.041153 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:54:54.041166 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:54:54.041179 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:54:54.041192 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:54:54.041206 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 11:54:54.041224 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:54:54.041237 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:54:54.041268 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:54:54.041283 kernel: landlock: Up and running.
Jan 29 11:54:54.041296 kernel: SELinux: Initializing.
Jan 29 11:54:54.041309 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:54:54.041322 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:54:54.041336 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 29 11:54:54.041349 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 11:54:54.041362 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 11:54:54.041382 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 11:54:54.041396 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 29 11:54:54.041409 kernel: signal: max sigframe size: 1776
Jan 29 11:54:54.041422 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:54:54.041436 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:54:54.041449 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 11:54:54.041462 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:54:54.041476 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:54:54.041489 kernel: .... node #0, CPUs: #1
Jan 29 11:54:54.041507 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 29 11:54:54.041520 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:54:54.041533 kernel: smpboot: Max logical packages: 16
Jan 29 11:54:54.041550 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 29 11:54:54.041563 kernel: devtmpfs: initialized
Jan 29 11:54:54.041576 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:54:54.041589 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:54:54.041603 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 11:54:54.041616 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:54:54.041641 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:54:54.041655 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:54:54.041668 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:54:54.041681 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:54:54.041694 kernel: audit: type=2000 audit(1738151692.642:1): state=initialized audit_enabled=0 res=1
Jan 29 11:54:54.041707 kernel: cpuidle: using governor menu
Jan 29 11:54:54.041720 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:54:54.041733 kernel: dca service started, version 1.12.1
Jan 29 11:54:54.041747 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:54:54.041766 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:54:54.041780 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:54:54.041793 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:54:54.041807 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:54:54.041820 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:54:54.041833 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:54:54.041846 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:54:54.041859 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:54:54.041873 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:54:54.041890 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:54:54.041904 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:54:54.041917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:54:54.041930 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:54:54.041943 kernel: ACPI: Interpreter enabled
Jan 29 11:54:54.041968 kernel: ACPI: PM: (supports S0 S5)
Jan 29 11:54:54.041981 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:54:54.041995 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:54:54.042008 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:54:54.042026 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:54:54.042040 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:54:54.042316 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:54:54.042507 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:54:54.042677 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:54:54.042697 kernel: PCI host bridge to bus 0000:00
Jan 29 11:54:54.042903 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:54:54.043079 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:54:54.043237 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:54:54.043408 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 29 11:54:54.043582 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:54:54.043755 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 29 11:54:54.043918 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:54:54.044137 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:54:54.044362 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 29 11:54:54.044535 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 29 11:54:54.044705 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 29 11:54:54.044887 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 29 11:54:54.045086 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:54:54.045289 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.045471 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 29 11:54:54.045672 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.045883 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 29 11:54:54.046102 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.049312 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 29 11:54:54.049525 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.049715 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 29 11:54:54.049922 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.050128 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 29 11:54:54.050335 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.050506 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 29 11:54:54.050695 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.050867 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 29 11:54:54.051088 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 11:54:54.052271 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 29 11:54:54.052476 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:54:54.052654 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 11:54:54.052826 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 29 11:54:54.053011 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 11:54:54.053189 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 29 11:54:54.054432 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:54:54.054610 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:54:54.054779 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 29 11:54:54.054994 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 29 11:54:54.055176 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:54:54.056432 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:54:54.056630 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:54:54.056799 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 29 11:54:54.056979 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 29 11:54:54.057155 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:54:54.057407 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 11:54:54.057602 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 29 11:54:54.057774 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 29 11:54:54.057993 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 11:54:54.058755 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 11:54:54.058979 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 11:54:54.059159 kernel: pci_bus 0000:02: extended config space not accessible
Jan 29 11:54:54.060493 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 29 11:54:54.060800 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 29 11:54:54.061028 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 11:54:54.061214 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 11:54:54.061456 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 11:54:54.061629 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 29 11:54:54.061799 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 11:54:54.061990 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 11:54:54.062161 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 11:54:54.064472 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 11:54:54.064672 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 11:54:54.064889 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 11:54:54.065090 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 11:54:54.067292 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 11:54:54.067490 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 11:54:54.067676 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 11:54:54.067855 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 11:54:54.068046 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 11:54:54.068215 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 11:54:54.068415 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 11:54:54.068586 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 11:54:54.068751 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 11:54:54.068914 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 11:54:54.069094 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 11:54:54.069294 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 11:54:54.069460 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 11:54:54.069630 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 11:54:54.069799 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 11:54:54.070010 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 11:54:54.070032 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:54:54.070046 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:54:54.070060 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:54:54.070073 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:54:54.070094 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:54:54.070108 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:54:54.070121 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:54:54.070134 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:54:54.070147 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:54:54.070160 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:54:54.070174 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:54:54.070187 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:54:54.070200 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:54:54.070218 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:54:54.070232 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:54:54.070245 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:54:54.072760 kernel: iommu: Default domain type: Translated
Jan 29 11:54:54.072779 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:54:54.072793 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:54:54.072806 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:54:54.072820 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:54:54.072833 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 29 11:54:54.073039 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:54:54.073228 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:54:54.073421 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:54:54.073442 kernel: vgaarb: loaded
Jan 29 11:54:54.073468 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:54:54.073481 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:54:54.073494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:54:54.073508 kernel: pnp: PnP ACPI init
Jan 29 11:54:54.073683 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:54:54.073705 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 11:54:54.073722 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:54:54.073748 kernel: NET: Registered PF_INET protocol family
Jan 29 11:54:54.073762 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:54:54.073775 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 11:54:54.073789 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:54:54.073802 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:54:54.073823 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 11:54:54.073837 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 11:54:54.073850 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:54:54.073863 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:54:54.073877 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:54:54.073890 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:54:54.074068 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 29 11:54:54.074249 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 11:54:54.074461 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 11:54:54.074627 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 11:54:54.074816 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 11:54:54.074995 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 11:54:54.075162 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 11:54:54.075343 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 11:54:54.077382 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 11:54:54.077545 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 11:54:54.077726 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 11:54:54.077918 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 11:54:54.078101 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 11:54:54.079324 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 11:54:54.079520 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 11:54:54.079675 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 11:54:54.079888 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 11:54:54.080097 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 11:54:54.081306 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 11:54:54.081478 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 11:54:54.081644 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 11:54:54.081831 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 11:54:54.082034 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 11:54:54.082200 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 11:54:54.084030 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 11:54:54.084198 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 11:54:54.084403 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 11:54:54.084570 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 11:54:54.084735 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 11:54:54.084932 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 11:54:54.085118 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 11:54:54.085309 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 11:54:54.085501 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 11:54:54.085669 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 11:54:54.085857 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 11:54:54.086045 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 11:54:54.086212 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 11:54:54.088413 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 11:54:54.088599 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 11:54:54.088777 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 11:54:54.088958 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 11:54:54.089132 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 11:54:54.091331 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 11:54:54.091503 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 11:54:54.091710 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 11:54:54.091887 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 11:54:54.092067 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 11:54:54.092233 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 11:54:54.092425 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 11:54:54.092601 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 11:54:54.092770 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:54:54.092922 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:54:54.093091 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:54:54.095281 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 29 11:54:54.095441 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:54:54.095592 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 29 11:54:54.095762 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 11:54:54.095919 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 29 11:54:54.096090 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 11:54:54.096309 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 11:54:54.096513 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 29 11:54:54.096674 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 11:54:54.096835 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 11:54:54.097042 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 29 11:54:54.097203 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 11:54:54.097409 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 11:54:54.097585 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 29 11:54:54.097793 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 11:54:54.097976 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 11:54:54.098158 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 29 11:54:54.100422 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 11:54:54.100592 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 11:54:54.100760 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 29 11:54:54.100938 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 11:54:54.101109 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 11:54:54.102388 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 29 11:54:54.102561 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 11:54:54.102718 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 11:54:54.102891 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 29 11:54:54.103064 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 11:54:54.103242 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 11:54:54.103285 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:54:54.103302 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:54:54.103316 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan
29 11:54:54.103341 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 29 11:54:54.103354 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:54:54.103367 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 29 11:54:54.103393 kernel: Initialise system trusted keyrings Jan 29 11:54:54.103414 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:54:54.103429 kernel: Key type asymmetric registered Jan 29 11:54:54.103443 kernel: Asymmetric key parser 'x509' registered Jan 29 11:54:54.103456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:54:54.103470 kernel: io scheduler mq-deadline registered Jan 29 11:54:54.103489 kernel: io scheduler kyber registered Jan 29 11:54:54.103503 kernel: io scheduler bfq registered Jan 29 11:54:54.103676 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 29 11:54:54.103857 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 29 11:54:54.104069 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.104238 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 29 11:54:54.104456 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 29 11:54:54.104631 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.104800 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 29 11:54:54.105004 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 29 11:54:54.105182 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.105379 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 29 
11:54:54.105546 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 29 11:54:54.105711 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.105881 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 29 11:54:54.106088 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 29 11:54:54.106292 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.106465 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 29 11:54:54.106658 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 29 11:54:54.106827 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.107028 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 29 11:54:54.107198 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 29 11:54:54.107430 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.107598 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 29 11:54:54.107764 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 29 11:54:54.107931 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:54:54.107972 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:54:54.107988 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 11:54:54.108010 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 11:54:54.108024 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:54:54.108038 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:54:54.108053 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:54:54.108067 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:54:54.108080 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:54:54.108284 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 11:54:54.108308 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:54:54.108488 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 11:54:54.108656 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T11:54:53 UTC (1738151693) Jan 29 11:54:54.108811 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 11:54:54.108832 kernel: intel_pstate: CPU model not supported Jan 29 11:54:54.108847 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:54:54.108861 kernel: Segment Routing with IPv6 Jan 29 11:54:54.108879 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:54:54.108894 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:54:54.108907 kernel: Key type dns_resolver registered Jan 29 11:54:54.108928 kernel: IPI shorthand broadcast: enabled Jan 29 11:54:54.108942 kernel: sched_clock: Marking stable (1246003917, 241697468)->(1621820686, -134119301) Jan 29 11:54:54.108969 kernel: registered taskstats version 1 Jan 29 11:54:54.108983 kernel: Loading compiled-in X.509 certificates Jan 29 11:54:54.108997 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 29 11:54:54.109011 kernel: Key type .fscrypt registered Jan 29 11:54:54.109025 kernel: Key type fscrypt-provisioning registered Jan 29 11:54:54.109039 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 11:54:54.109053 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:54:54.109073 kernel: ima: No architecture policies found Jan 29 11:54:54.109087 kernel: clk: Disabling unused clocks Jan 29 11:54:54.109101 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:54:54.109115 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:54:54.109129 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:54:54.109147 kernel: Run /init as init process Jan 29 11:54:54.109162 kernel: with arguments: Jan 29 11:54:54.109175 kernel: /init Jan 29 11:54:54.109189 kernel: with environment: Jan 29 11:54:54.109207 kernel: HOME=/ Jan 29 11:54:54.109220 kernel: TERM=linux Jan 29 11:54:54.109234 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:54:54.109299 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:54:54.109321 systemd[1]: Detected virtualization kvm. Jan 29 11:54:54.109340 systemd[1]: Detected architecture x86-64. Jan 29 11:54:54.109355 systemd[1]: Running in initrd. Jan 29 11:54:54.109370 systemd[1]: No hostname configured, using default hostname. Jan 29 11:54:54.109392 systemd[1]: Hostname set to . Jan 29 11:54:54.109419 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:54:54.109434 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:54:54.109449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:54:54.109476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 11:54:54.109490 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:54:54.109505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:54:54.109519 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:54:54.109539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:54:54.109555 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:54:54.109569 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:54:54.109584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:54:54.109598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:54:54.109612 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:54:54.109626 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:54:54.109657 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:54:54.109672 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:54:54.109687 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:54:54.109702 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:54:54.109718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:54:54.109733 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:54:54.109748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:54:54.109763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:54:54.109783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 11:54:54.109798 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:54:54.109813 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:54:54.109828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:54:54.109844 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:54:54.109859 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:54:54.109874 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:54:54.109889 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:54:54.109904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:54.109980 systemd-journald[201]: Collecting audit messages is disabled. Jan 29 11:54:54.110016 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:54:54.110032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:54:54.110047 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:54:54.110070 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:54:54.110085 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:54:54.110100 kernel: Bridge firewalling registered Jan 29 11:54:54.110121 systemd-journald[201]: Journal started Jan 29 11:54:54.110154 systemd-journald[201]: Runtime Journal (/run/log/journal/5289e4f27d3c45339271348e7150bd41) is 4.7M, max 37.9M, 33.2M free. Jan 29 11:54:54.061354 systemd-modules-load[202]: Inserted module 'overlay' Jan 29 11:54:54.102417 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 29 11:54:54.166274 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:54:54.167795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 11:54:54.169911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:54.171964 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:54:54.184558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:54:54.187489 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:54:54.190448 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:54:54.204446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:54:54.214536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:54:54.221512 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:54:54.222818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:54:54.235462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:54:54.237836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:54:54.247482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:54:54.252730 dracut-cmdline[235]: dracut-dracut-053 Jan 29 11:54:54.261287 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:54:54.289234 systemd-resolved[240]: Positive Trust Anchors: Jan 29 11:54:54.290212 systemd-resolved[240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:54:54.290286 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:54:54.298556 systemd-resolved[240]: Defaulting to hostname 'linux'. Jan 29 11:54:54.301638 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:54:54.303538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:54:54.363303 kernel: SCSI subsystem initialized Jan 29 11:54:54.375296 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:54:54.389319 kernel: iscsi: registered transport (tcp) Jan 29 11:54:54.416406 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:54:54.416505 kernel: QLogic iSCSI HBA Driver Jan 29 11:54:54.475747 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:54:54.481464 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:54:54.522128 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 11:54:54.522181 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:54:54.522981 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:54:54.572327 kernel: raid6: sse2x4 gen() 13146 MB/s Jan 29 11:54:54.590281 kernel: raid6: sse2x2 gen() 8985 MB/s Jan 29 11:54:54.609063 kernel: raid6: sse2x1 gen() 9517 MB/s Jan 29 11:54:54.609104 kernel: raid6: using algorithm sse2x4 gen() 13146 MB/s Jan 29 11:54:54.627970 kernel: raid6: .... xor() 7560 MB/s, rmw enabled Jan 29 11:54:54.628027 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 11:54:54.654299 kernel: xor: automatically using best checksumming function avx Jan 29 11:54:54.828318 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:54:54.843803 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:54:54.850495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:54:54.880647 systemd-udevd[420]: Using default interface naming scheme 'v255'. Jan 29 11:54:54.888331 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:54:54.896445 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:54:54.918495 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jan 29 11:54:54.959836 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:54:54.976455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:54:55.092284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:54:55.101478 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:54:55.134883 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:54:55.137671 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 11:54:55.139747 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:54:55.141379 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:54:55.150656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:54:55.173888 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:54:55.222765 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 29 11:54:55.296074 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 11:54:55.296346 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:54:55.296371 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:54:55.296390 kernel: GPT:17805311 != 125829119 Jan 29 11:54:55.296418 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:54:55.296438 kernel: GPT:17805311 != 125829119 Jan 29 11:54:55.296455 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:54:55.296474 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:55.296492 kernel: AVX version of gcm_enc/dec engaged. Jan 29 11:54:55.296510 kernel: AES CTR mode by8 optimization enabled Jan 29 11:54:55.296528 kernel: ACPI: bus type USB registered Jan 29 11:54:55.296546 kernel: usbcore: registered new interface driver usbfs Jan 29 11:54:55.296564 kernel: usbcore: registered new interface driver hub Jan 29 11:54:55.270750 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:54:55.270933 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:54:55.271906 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:54:55.272657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 29 11:54:55.305621 kernel: usbcore: registered new device driver usb Jan 29 11:54:55.272829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:55.273605 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:55.283592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:55.349304 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 11:54:55.422447 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 29 11:54:55.422698 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 11:54:55.422935 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 11:54:55.423146 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 29 11:54:55.423394 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 29 11:54:55.423602 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (469) Jan 29 11:54:55.423625 kernel: hub 1-0:1.0: USB hub found Jan 29 11:54:55.423878 kernel: libata version 3.00 loaded. Jan 29 11:54:55.423924 kernel: hub 1-0:1.0: 4 ports detected Jan 29 11:54:55.424125 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (472) Jan 29 11:54:55.424147 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 11:54:55.424806 kernel: hub 2-0:1.0: USB hub found Jan 29 11:54:55.425050 kernel: hub 2-0:1.0: 4 ports detected Jan 29 11:54:55.424616 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 29 11:54:55.459203 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:54:55.490588 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:54:55.490617 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:54:55.490828 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:54:55.491083 kernel: scsi host0: ahci Jan 29 11:54:55.491480 kernel: scsi host1: ahci Jan 29 11:54:55.491886 kernel: scsi host2: ahci Jan 29 11:54:55.492110 kernel: scsi host3: ahci Jan 29 11:54:55.492327 kernel: scsi host4: ahci Jan 29 11:54:55.492527 kernel: scsi host5: ahci Jan 29 11:54:55.492813 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 29 11:54:55.492838 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 29 11:54:55.492865 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 29 11:54:55.492884 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 29 11:54:55.492902 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 29 11:54:55.492938 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 29 11:54:55.460310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:55.471813 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:54:55.484347 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:54:55.494449 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:54:55.502582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:54:55.509538 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 29 11:54:55.512436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:54:55.519565 disk-uuid[562]: Primary Header is updated. Jan 29 11:54:55.519565 disk-uuid[562]: Secondary Entries is updated. Jan 29 11:54:55.519565 disk-uuid[562]: Secondary Header is updated. Jan 29 11:54:55.527677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:55.537387 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:55.539705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:54:55.653529 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 11:54:55.795279 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:54:55.802295 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:55.802345 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:55.805175 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:55.808861 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:55.808925 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:55.809268 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:55.821897 kernel: usbcore: registered new interface driver usbhid Jan 29 11:54:55.821948 kernel: usbhid: USB HID core driver Jan 29 11:54:55.829508 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 29 11:54:55.829555 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 29 11:54:56.537800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:56.539073 disk-uuid[563]: The operation has completed successfully. Jan 29 11:54:56.596588 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:54:56.596749 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 29 11:54:56.621436 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:54:56.627812 sh[583]: Success Jan 29 11:54:56.644499 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 29 11:54:56.734851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:54:56.736729 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:54:56.739699 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:54:56.774293 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 29 11:54:56.774369 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:54:56.774390 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:54:56.774409 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:54:56.775423 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:54:56.787066 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:54:56.788725 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:54:56.793481 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:54:56.796480 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 29 11:54:56.817172 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:54:56.817233 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:54:56.817284 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:54:56.823304 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:54:56.839501 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:54:56.839013 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:54:56.848645 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:54:56.856462 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:54:56.945525 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:54:56.955075 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:54:56.992667 systemd-networkd[765]: lo: Link UP Jan 29 11:54:56.994138 systemd-networkd[765]: lo: Gained carrier Jan 29 11:54:56.997569 systemd-networkd[765]: Enumeration completed Jan 29 11:54:56.998711 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:54:56.999195 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:54:56.999201 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:54:57.001962 systemd-networkd[765]: eth0: Link UP Jan 29 11:54:57.001969 systemd-networkd[765]: eth0: Gained carrier Jan 29 11:54:57.001981 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:54:57.004792 systemd[1]: Reached target network.target - Network. 
Jan 29 11:54:57.019314 ignition[684]: Ignition 2.20.0
Jan 29 11:54:57.019340 ignition[684]: Stage: fetch-offline
Jan 29 11:54:57.020119 systemd-networkd[765]: eth0: DHCPv4 address 10.230.10.162/30, gateway 10.230.10.161 acquired from 10.230.10.161
Jan 29 11:54:57.019441 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:54:57.019461 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:54:57.019668 ignition[684]: parsed url from cmdline: ""
Jan 29 11:54:57.024488 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:54:57.019676 ignition[684]: no config URL provided
Jan 29 11:54:57.019686 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:54:57.019702 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:54:57.019719 ignition[684]: failed to fetch config: resource requires networking
Jan 29 11:54:57.020008 ignition[684]: Ignition finished successfully
Jan 29 11:54:57.030445 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:54:57.053021 ignition[773]: Ignition 2.20.0
Jan 29 11:54:57.053044 ignition[773]: Stage: fetch
Jan 29 11:54:57.053334 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:54:57.053353 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:54:57.053499 ignition[773]: parsed url from cmdline: ""
Jan 29 11:54:57.053507 ignition[773]: no config URL provided
Jan 29 11:54:57.053517 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:54:57.053533 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:54:57.053693 ignition[773]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 11:54:57.054339 ignition[773]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 11:54:57.054376 ignition[773]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 11:54:57.069683 ignition[773]: GET result: OK
Jan 29 11:54:57.072168 ignition[773]: parsing config with SHA512: 05152b5455183d89111cf116a08b8bcb229a0a3fede27ade032544fab4cfa8b52aa77d80e53f276d48f6d74ab290468c830cedf87531fd3a2dfe6827bfb4e5d8
Jan 29 11:54:57.077787 unknown[773]: fetched base config from "system"
Jan 29 11:54:57.077804 unknown[773]: fetched base config from "system"
Jan 29 11:54:57.078225 ignition[773]: fetch: fetch complete
Jan 29 11:54:57.077813 unknown[773]: fetched user config from "openstack"
Jan 29 11:54:57.078234 ignition[773]: fetch: fetch passed
Jan 29 11:54:57.081671 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:54:57.078318 ignition[773]: Ignition finished successfully
Jan 29 11:54:57.093562 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:54:57.109994 ignition[780]: Ignition 2.20.0
Jan 29 11:54:57.110013 ignition[780]: Stage: kargs
Jan 29 11:54:57.110241 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:54:57.110289 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:54:57.113724 ignition[780]: kargs: kargs passed
Jan 29 11:54:57.114937 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:54:57.113786 ignition[780]: Ignition finished successfully
Jan 29 11:54:57.122453 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:54:57.141067 ignition[786]: Ignition 2.20.0
Jan 29 11:54:57.141089 ignition[786]: Stage: disks
Jan 29 11:54:57.141346 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:54:57.143405 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:54:57.141365 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:54:57.144653 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:54:57.142426 ignition[786]: disks: disks passed
Jan 29 11:54:57.145516 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:54:57.142487 ignition[786]: Ignition finished successfully
Jan 29 11:54:57.146990 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:54:57.148584 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:54:57.150088 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:54:57.158805 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:54:57.177746 systemd-fsck[794]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 11:54:57.181138 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:54:57.187341 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:54:57.299260 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 29 11:54:57.300116 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:54:57.301512 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:54:57.312384 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:54:57.315012 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:54:57.316928 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:54:57.321440 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 11:54:57.323731 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:54:57.336609 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Jan 29 11:54:57.336642 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:54:57.336662 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:54:57.336681 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:54:57.336700 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:54:57.323773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:54:57.332279 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:54:57.345681 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:54:57.349202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:54:57.433307 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:54:57.442326 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:54:57.449001 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:54:57.455681 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:54:57.565785 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:54:57.571432 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:54:57.575445 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:54:57.588274 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:54:57.617836 ignition[918]: INFO : Ignition 2.20.0
Jan 29 11:54:57.617836 ignition[918]: INFO : Stage: mount
Jan 29 11:54:57.617836 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:54:57.617836 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:54:57.622905 ignition[918]: INFO : mount: mount passed
Jan 29 11:54:57.622905 ignition[918]: INFO : Ignition finished successfully
Jan 29 11:54:57.619543 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:54:57.620920 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:54:57.768352 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:54:58.705626 systemd-networkd[765]: eth0: Gained IPv6LL
Jan 29 11:55:00.212112 systemd-networkd[765]: eth0: Ignoring DHCPv6 address 2a02:1348:179:82a8:24:19ff:fee6:aa2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:82a8:24:19ff:fee6:aa2/64 assigned by NDisc.
Jan 29 11:55:00.212129 systemd-networkd[765]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 11:55:04.503891 coreos-metadata[804]: Jan 29 11:55:04.503 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 11:55:04.527590 coreos-metadata[804]: Jan 29 11:55:04.527 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 11:55:04.541611 coreos-metadata[804]: Jan 29 11:55:04.541 INFO Fetch successful
Jan 29 11:55:04.542457 coreos-metadata[804]: Jan 29 11:55:04.542 INFO wrote hostname srv-xy63l.gb1.brightbox.com to /sysroot/etc/hostname
Jan 29 11:55:04.544405 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 11:55:04.544615 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 11:55:04.551367 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:55:04.567490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:55:04.581271 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (935)
Jan 29 11:55:04.586611 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:55:04.586686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:55:04.588475 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:55:04.594427 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:55:04.596591 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:55:04.621398 ignition[953]: INFO : Ignition 2.20.0
Jan 29 11:55:04.621398 ignition[953]: INFO : Stage: files
Jan 29 11:55:04.623081 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:55:04.623081 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:55:04.623081 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:55:04.625701 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:55:04.625701 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:55:04.627588 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:55:04.628638 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:55:04.628638 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:55:04.628220 unknown[953]: wrote ssh authorized keys file for user: core
Jan 29 11:55:04.631455 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:55:04.631455 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 11:55:04.809324 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:55:05.079963 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:55:05.079963 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:55:05.079963 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 29 11:55:05.612827 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:55:05.937460 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:55:05.937460 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:55:05.940604 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 11:55:06.416177 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:55:07.547911 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:55:07.547911 ignition[953]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:55:07.551470 ignition[953]: INFO : files: files passed
Jan 29 11:55:07.551470 ignition[953]: INFO : Ignition finished successfully
Jan 29 11:55:07.552232 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:55:07.562598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:55:07.565531 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:55:07.572915 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:55:07.573089 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:55:07.584496 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:55:07.584496 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:55:07.588412 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:55:07.591233 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:55:07.592822 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:55:07.607573 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:55:07.645181 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:55:07.646334 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:55:07.647738 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:55:07.649085 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:55:07.650706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:55:07.656456 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:55:07.675842 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:55:07.680523 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:55:07.708708 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:55:07.710811 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:55:07.711735 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:55:07.713184 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:55:07.713395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:55:07.715146 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:55:07.716063 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:55:07.717730 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:55:07.719054 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:55:07.719882 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:55:07.720884 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:55:07.721771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:55:07.723466 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:55:07.725377 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:55:07.726882 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:55:07.728205 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:55:07.728513 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:55:07.730390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:55:07.731453 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:55:07.732824 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:55:07.733066 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:55:07.734284 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:55:07.734526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:55:07.735980 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:55:07.736154 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:55:07.737917 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:55:07.738072 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:55:07.748037 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:55:07.751552 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:55:07.752278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:55:07.752527 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:55:07.754523 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:55:07.754778 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:55:07.774363 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:55:07.775053 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:55:07.782465 ignition[1005]: INFO : Ignition 2.20.0
Jan 29 11:55:07.782465 ignition[1005]: INFO : Stage: umount
Jan 29 11:55:07.787359 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:55:07.787359 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:55:07.784821 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:55:07.792495 ignition[1005]: INFO : umount: umount passed
Jan 29 11:55:07.792495 ignition[1005]: INFO : Ignition finished successfully
Jan 29 11:55:07.786908 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:55:07.787051 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:55:07.789510 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:55:07.789708 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:55:07.791593 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:55:07.791911 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:55:07.793212 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:55:07.793378 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:55:07.794066 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:55:07.794133 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:55:07.794844 systemd[1]: Stopped target network.target - Network.
Jan 29 11:55:07.795443 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:55:07.795511 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:55:07.796273 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:55:07.796948 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:55:07.808687 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:55:07.810149 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:55:07.811569 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:55:07.813232 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:55:07.813335 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:55:07.814514 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:55:07.814590 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:55:07.815865 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:55:07.815948 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:55:07.817159 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:55:07.817226 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:55:07.818557 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:55:07.818650 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:55:07.820121 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:55:07.821765 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:55:07.825367 systemd-networkd[765]: eth0: DHCPv6 lease lost
Jan 29 11:55:07.827478 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:55:07.827676 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:55:07.829790 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:55:07.829895 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:55:07.843809 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:55:07.846387 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:55:07.846639 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:55:07.847962 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:55:07.851524 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:55:07.851734 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:55:07.869386 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:55:07.871076 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:55:07.873923 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:55:07.874080 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:55:07.877774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:55:07.877871 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:55:07.879588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:55:07.879654 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:55:07.880343 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:55:07.880416 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:55:07.882520 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:55:07.882607 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:55:07.883972 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:55:07.884053 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:55:07.890509 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:55:07.891288 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:55:07.891380 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:55:07.892932 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:55:07.893009 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:55:07.895369 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:55:07.895438 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:55:07.896964 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:55:07.897035 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:55:07.898523 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:55:07.898609 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:55:07.902065 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:55:07.902146 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:55:07.903648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:55:07.903716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:55:07.911166 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:55:07.911348 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:55:07.913683 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:55:07.923512 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:55:07.934029 systemd[1]: Switching root.
Jan 29 11:55:07.967534 systemd-journald[201]: Journal stopped
Jan 29 11:55:09.633188 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:55:09.636356 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:55:09.636389 kernel: SELinux: policy capability open_perms=1
Jan 29 11:55:09.636411 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:55:09.636438 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:55:09.636459 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:55:09.636479 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:55:09.636513 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:55:09.636547 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:55:09.636569 kernel: audit: type=1403 audit(1738151708.369:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:55:09.636599 systemd[1]: Successfully loaded SELinux policy in 58ms.
Jan 29 11:55:09.636637 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.717ms.
Jan 29 11:55:09.636660 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:55:09.636683 systemd[1]: Detected virtualization kvm.
Jan 29 11:55:09.636705 systemd[1]: Detected architecture x86-64.
Jan 29 11:55:09.636741 systemd[1]: Detected first boot.
Jan 29 11:55:09.636763 systemd[1]: Hostname set to .
Jan 29 11:55:09.636784 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:55:09.636805 zram_generator::config[1047]: No configuration found.
Jan 29 11:55:09.636829 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:55:09.636850 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:55:09.636871 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:55:09.636894 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:55:09.636936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:55:09.636959 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:55:09.636981 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:55:09.637002 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:55:09.637023 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:55:09.637044 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:55:09.637067 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:55:09.637089 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:55:09.637124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:55:09.637148 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:55:09.637169 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:55:09.637191 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:55:09.637213 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:55:09.638948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:55:09.638986 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:55:09.639009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:55:09.639031 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:55:09.639071 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:55:09.639095 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:55:09.639116 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:55:09.639137 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:55:09.639158 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:55:09.639180 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:55:09.639215 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:55:09.639992 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:55:09.640024 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:55:09.640046 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:55:09.640068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:55:09.640100 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:55:09.640133 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:55:09.640170 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:55:09.640194 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:55:09.640223 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:55:09.642975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:09.643009 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:55:09.643032 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:55:09.643095 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:55:09.643120 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:55:09.643162 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:55:09.643186 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:55:09.643208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:55:09.643230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:55:09.643316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:55:09.643343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:55:09.643365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:55:09.643403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:55:09.643427 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:55:09.643462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:55:09.643487 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:55:09.643508 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:55:09.643541 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:55:09.643566 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:55:09.643588 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:55:09.643609 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:55:09.643630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:55:09.643661 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:55:09.643697 kernel: fuse: init (API version 7.39)
Jan 29 11:55:09.643721 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:55:09.643743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:55:09.643765 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:55:09.643787 systemd[1]: Stopped verity-setup.service.
Jan 29 11:55:09.643809 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:09.643831 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:55:09.643860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:55:09.643895 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:55:09.643919 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:55:09.643940 kernel: ACPI: bus type drm_connector registered
Jan 29 11:55:09.643961 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:55:09.643982 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:55:09.644016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:55:09.644053 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:55:09.644076 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:55:09.644098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:55:09.644120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:55:09.644163 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:55:09.644186 kernel: loop: module loaded
Jan 29 11:55:09.644207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:55:09.644239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:55:09.644317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:55:09.644342 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:55:09.644365 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:55:09.644386 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:55:09.644458 systemd-journald[1140]: Collecting audit messages is disabled.
Jan 29 11:55:09.644499 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:55:09.644541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:55:09.644565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:55:09.644586 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:55:09.644613 systemd-journald[1140]: Journal started
Jan 29 11:55:09.644662 systemd-journald[1140]: Runtime Journal (/run/log/journal/5289e4f27d3c45339271348e7150bd41) is 4.7M, max 37.9M, 33.2M free.
Jan 29 11:55:09.189233 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:55:09.648336 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:55:09.211418 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:55:09.212195 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:55:09.651596 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:55:09.669534 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:55:09.680354 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:55:09.688950 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:55:09.691430 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:55:09.691487 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:55:09.695397 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:55:09.705232 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:55:09.711395 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:55:09.712312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:55:09.721625 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:55:09.727386 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:55:09.728208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:55:09.739510 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:55:09.740372 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:55:09.745460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:55:09.755462 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:55:09.759534 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:55:09.764696 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:55:09.766637 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:55:09.768720 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:55:09.774632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:55:09.783816 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:55:09.799938 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:55:09.853366 systemd-journald[1140]: Time spent on flushing to /var/log/journal/5289e4f27d3c45339271348e7150bd41 is 78.910ms for 1147 entries.
Jan 29 11:55:09.853366 systemd-journald[1140]: System Journal (/var/log/journal/5289e4f27d3c45339271348e7150bd41) is 8.0M, max 584.8M, 576.8M free.
Jan 29 11:55:09.953558 systemd-journald[1140]: Received client request to flush runtime journal.
Jan 29 11:55:09.953622 kernel: loop0: detected capacity change from 0 to 141000
Jan 29 11:55:09.953651 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:55:09.852316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:55:09.893706 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 29 11:55:09.893733 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 29 11:55:09.936024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:55:09.941370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:55:09.944335 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:55:09.958540 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:55:09.960845 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:55:09.991325 kernel: loop1: detected capacity change from 0 to 8
Jan 29 11:55:09.996799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:55:10.012749 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:55:10.057175 kernel: loop2: detected capacity change from 0 to 138184
Jan 29 11:55:10.056931 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 11:55:10.080688 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:55:10.094087 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:55:10.118391 kernel: loop3: detected capacity change from 0 to 205544
Jan 29 11:55:10.145600 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 29 11:55:10.146090 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 29 11:55:10.154065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:55:10.176276 kernel: loop4: detected capacity change from 0 to 141000
Jan 29 11:55:10.208488 kernel: loop5: detected capacity change from 0 to 8
Jan 29 11:55:10.216273 kernel: loop6: detected capacity change from 0 to 138184
Jan 29 11:55:10.246970 kernel: loop7: detected capacity change from 0 to 205544
Jan 29 11:55:10.275222 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 11:55:10.276178 (sd-merge)[1211]: Merged extensions into '/usr'.
Jan 29 11:55:10.296370 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:55:10.296396 systemd[1]: Reloading...
Jan 29 11:55:10.468311 zram_generator::config[1237]: No configuration found.
Jan 29 11:55:10.535370 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:55:10.674341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:55:10.745906 systemd[1]: Reloading finished in 448 ms.
Jan 29 11:55:10.783979 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:55:10.785829 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:55:10.799554 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:55:10.809679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:55:10.829978 systemd[1]: Reloading requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:55:10.830149 systemd[1]: Reloading...
Jan 29 11:55:10.857810 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:55:10.858308 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:55:10.864781 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:55:10.865211 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Jan 29 11:55:10.865356 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Jan 29 11:55:10.875023 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:55:10.875043 systemd-tmpfiles[1294]: Skipping /boot
Jan 29 11:55:10.907806 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:55:10.907828 systemd-tmpfiles[1294]: Skipping /boot
Jan 29 11:55:10.968754 zram_generator::config[1321]: No configuration found.
Jan 29 11:55:11.144389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:55:11.218828 systemd[1]: Reloading finished in 387 ms.
Jan 29 11:55:11.246118 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:55:11.254884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:55:11.279335 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:55:11.286364 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:55:11.297638 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:55:11.305633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:55:11.315659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:55:11.326592 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:55:11.340708 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:11.341049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:55:11.353073 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:55:11.360646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:55:11.364318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:55:11.366543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:55:11.366725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:11.380634 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:55:11.384545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:11.384840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:55:11.385088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:55:11.385235 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:11.389792 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:55:11.407699 systemd-udevd[1390]: Using default interface naming scheme 'v255'.
Jan 29 11:55:11.418070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:11.419758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:55:11.429705 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:55:11.430722 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:55:11.443811 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:55:11.444631 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:55:11.446671 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:55:11.448223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:55:11.448523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:55:11.455693 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:55:11.468558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:55:11.481651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:55:11.481921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:55:11.483440 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:55:11.483670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:55:11.486055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:55:11.486158 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:55:11.489983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:55:11.490277 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:55:11.493667 augenrules[1417]: No rules
Jan 29 11:55:11.498864 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:55:11.499221 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:55:11.500549 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:55:11.508410 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:55:11.521136 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:55:11.523544 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:55:11.525210 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:55:11.580823 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:55:11.700896 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 11:55:11.718150 systemd-networkd[1430]: lo: Link UP
Jan 29 11:55:11.718962 systemd-networkd[1430]: lo: Gained carrier
Jan 29 11:55:11.722019 systemd-networkd[1430]: Enumeration completed
Jan 29 11:55:11.722201 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:55:11.732494 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:55:11.765701 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:55:11.766648 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:55:11.771829 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:55:11.771843 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:55:11.773776 systemd-networkd[1430]: eth0: Link UP
Jan 29 11:55:11.773895 systemd-networkd[1430]: eth0: Gained carrier
Jan 29 11:55:11.773994 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:55:11.784074 systemd-resolved[1388]: Positive Trust Anchors:
Jan 29 11:55:11.784094 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:55:11.784139 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:55:11.788360 systemd-networkd[1430]: eth0: DHCPv4 address 10.230.10.162/30, gateway 10.230.10.161 acquired from 10.230.10.161
Jan 29 11:55:11.789922 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection.
Jan 29 11:55:11.800890 systemd-resolved[1388]: Using system hostname 'srv-xy63l.gb1.brightbox.com'.
Jan 29 11:55:11.806767 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:55:11.807905 systemd[1]: Reached target network.target - Network.
Jan 29 11:55:11.808629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:55:11.857323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 11:55:11.857424 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1426)
Jan 29 11:55:11.866280 kernel: ACPI: button: Power Button [PWRF]
Jan 29 11:55:11.934277 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:55:11.973277 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 11:55:11.982703 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 11:55:11.983011 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 11:55:11.993159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:55:12.012455 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:55:12.040174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:55:12.051264 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 11:55:12.058607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:55:12.733525 systemd-resolved[1388]: Clock change detected. Flushing caches.
Jan 29 11:55:12.735444 systemd-timesyncd[1413]: Contacted time server 91.135.12.168:123 (0.flatcar.pool.ntp.org).
Jan 29 11:55:12.735679 systemd-timesyncd[1413]: Initial clock synchronization to Wed 2025-01-29 11:55:12.733125 UTC.
Jan 29 11:55:12.883741 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:55:12.915915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:55:12.923521 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:55:12.957316 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:55:12.993832 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:55:12.995113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:55:12.996002 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:55:12.996915 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:55:12.997939 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:55:12.999093 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:55:13.000061 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:55:13.000888 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:55:13.001722 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:55:13.001779 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:55:13.002413 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:55:13.010508 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:55:13.013495 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:55:13.018558 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:55:13.021282 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:55:13.022754 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:55:13.023650 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:55:13.024326 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:55:13.025020 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:55:13.025073 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:55:13.031474 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:55:13.036572 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:55:13.040569 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:55:13.042645 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:55:13.051427 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:55:13.055530 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:55:13.058141 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:55:13.067566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:55:13.071326 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:55:13.075472 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:55:13.087559 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:55:13.096376 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:55:13.098014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:55:13.098866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:55:13.104516 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:55:13.106747 jq[1479]: false
Jan 29 11:55:13.110386 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:55:13.121985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:55:13.122618 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:55:13.153036 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:55:13.155152 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:55:13.156639 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found loop4
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found loop5
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found loop6
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found loop7
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda1
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda2
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda3
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found usr
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda4
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda6
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda7
Jan 29 11:55:13.173316 extend-filesystems[1480]: Found vda9
Jan 29 11:55:13.173316 extend-filesystems[1480]: Checking size of /dev/vda9
Jan 29 11:55:13.214359 jq[1490]: true
Jan 29 11:55:13.200198 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:55:13.209235 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:55:13.209630 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:55:13.233457 dbus-daemon[1478]: [system] SELinux support is enabled Jan 29 11:55:13.238902 extend-filesystems[1480]: Resized partition /dev/vda9 Jan 29 11:55:13.242378 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:55:13.254315 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 29 11:55:13.242002 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:55:13.245557 dbus-daemon[1478]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1430 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 11:55:13.257286 update_engine[1488]: I20250129 11:55:13.251814 1488 main.cc:92] Flatcar Update Engine starting Jan 29 11:55:13.253706 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:55:13.259083 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 11:55:13.254587 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:55:13.254622 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:55:13.255602 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:55:13.255630 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:55:13.265035 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 11:55:13.277603 systemd[1]: Started update-engine.service - Update Engine. 
Jan 29 11:55:13.278811 update_engine[1488]: I20250129 11:55:13.277792 1488 update_check_scheduler.cc:74] Next update check in 10m23s Jan 29 11:55:13.282760 jq[1511]: true Jan 29 11:55:13.288541 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:55:13.294703 tar[1493]: linux-amd64/helm Jan 29 11:55:13.397945 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1432) Jan 29 11:55:13.423326 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 11:55:13.423400 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:55:13.426393 systemd-logind[1487]: New seat seat0. Jan 29 11:55:13.432718 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:55:13.546241 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:55:13.553112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:55:13.576900 systemd[1]: Starting sshkeys.service... Jan 29 11:55:13.613945 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:55:13.627752 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:55:13.671281 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 11:55:13.695961 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:55:13.695961 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 11:55:13.695961 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 11:55:13.707793 extend-filesystems[1480]: Resized filesystem in /dev/vda9 Jan 29 11:55:13.701021 systemd-networkd[1430]: eth0: Gained IPv6LL Jan 29 11:55:13.701392 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 29 11:55:13.702463 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:55:13.712804 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:55:13.714504 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:55:13.719552 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 11:55:13.721738 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:55:13.724667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:55:13.724478 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1518 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 11:55:13.727678 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:55:13.730008 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 11:55:13.744752 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 11:55:13.776077 polkitd[1557]: Started polkitd version 121 Jan 29 11:55:13.794553 polkitd[1557]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 11:55:13.794659 polkitd[1557]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 11:55:13.795631 polkitd[1557]: Finished loading, compiling and executing 2 rules Jan 29 11:55:13.797941 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 11:55:13.798168 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 11:55:13.799554 polkitd[1557]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 11:55:13.835419 systemd-hostnamed[1518]: Hostname set to (static) Jan 29 11:55:13.867118 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 29 11:55:13.880546 containerd[1506]: time="2025-01-29T11:55:13.880420175Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:55:13.970348 containerd[1506]: time="2025-01-29T11:55:13.968158068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:55:13.986759 containerd[1506]: time="2025-01-29T11:55:13.986692477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:55:13.987055 containerd[1506]: time="2025-01-29T11:55:13.987023846Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:55:13.987220 containerd[1506]: time="2025-01-29T11:55:13.987178914Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:55:13.988699 containerd[1506]: time="2025-01-29T11:55:13.988667312Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:55:13.988978 containerd[1506]: time="2025-01-29T11:55:13.988948362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:55:13.989217 containerd[1506]: time="2025-01-29T11:55:13.989172711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:55:13.990118 containerd[1506]: time="2025-01-29T11:55:13.990090201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:55:13.991572 containerd[1506]: time="2025-01-29T11:55:13.991538432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:55:13.991736 containerd[1506]: time="2025-01-29T11:55:13.991708062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:55:13.991856 containerd[1506]: time="2025-01-29T11:55:13.991828878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:55:13.991992 containerd[1506]: time="2025-01-29T11:55:13.991964591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:55:13.993637 containerd[1506]: time="2025-01-29T11:55:13.993605701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:55:13.994848 containerd[1506]: time="2025-01-29T11:55:13.994131889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:55:13.994848 containerd[1506]: time="2025-01-29T11:55:13.994312037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:55:13.994848 containerd[1506]: time="2025-01-29T11:55:13.994349108Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 29 11:55:13.994848 containerd[1506]: time="2025-01-29T11:55:13.994544865Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:55:13.994848 containerd[1506]: time="2025-01-29T11:55:13.994632249Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:55:14.007639 containerd[1506]: time="2025-01-29T11:55:14.007576645Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.008078759Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.008136606Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.008185130Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.008211898Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.008566711Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009039036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009228576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009277941Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009303609Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009330796Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009361584Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009382851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009404029Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010081 containerd[1506]: time="2025-01-29T11:55:14.009439167Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009469781Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009492218Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009511572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009548181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009573224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009592566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009639612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009663708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009686897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009705786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009725195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009783674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009811188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.010593 containerd[1506]: time="2025-01-29T11:55:14.009830376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.011046 containerd[1506]: time="2025-01-29T11:55:14.009850419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 11:55:14.011046 containerd[1506]: time="2025-01-29T11:55:14.009868982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.011046 containerd[1506]: time="2025-01-29T11:55:14.009890673Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:55:14.011046 containerd[1506]: time="2025-01-29T11:55:14.009933438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.011046 containerd[1506]: time="2025-01-29T11:55:14.009971814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.011046 containerd[1506]: time="2025-01-29T11:55:14.009993304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:55:14.011488 containerd[1506]: time="2025-01-29T11:55:14.011396266Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:55:14.011776 containerd[1506]: time="2025-01-29T11:55:14.011724311Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:55:14.011994 containerd[1506]: time="2025-01-29T11:55:14.011876534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:55:14.011994 containerd[1506]: time="2025-01-29T11:55:14.011928574Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:55:14.011994 containerd[1506]: time="2025-01-29T11:55:14.011947700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 29 11:55:14.012205 containerd[1506]: time="2025-01-29T11:55:14.011977610Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:55:14.012205 containerd[1506]: time="2025-01-29T11:55:14.012166932Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:55:14.012451 containerd[1506]: time="2025-01-29T11:55:14.012187498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:55:14.013944 containerd[1506]: time="2025-01-29T11:55:14.013806477Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:55:14.013944 containerd[1506]: time="2025-01-29T11:55:14.013902853Z" level=info msg="Connect containerd service" Jan 29 11:55:14.014603 containerd[1506]: time="2025-01-29T11:55:14.014316329Z" level=info msg="using legacy CRI server" Jan 29 11:55:14.014603 containerd[1506]: time="2025-01-29T11:55:14.014358658Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:55:14.014791 containerd[1506]: time="2025-01-29T11:55:14.014739818Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:55:14.016412 containerd[1506]: time="2025-01-29T11:55:14.016274343Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 29 11:55:14.017393 containerd[1506]: time="2025-01-29T11:55:14.016540529Z" level=info msg="Start subscribing containerd event" Jan 29 11:55:14.017393 containerd[1506]: time="2025-01-29T11:55:14.016629179Z" level=info msg="Start recovering state" Jan 29 11:55:14.017393 containerd[1506]: time="2025-01-29T11:55:14.016738062Z" level=info msg="Start event monitor" Jan 29 11:55:14.017393 containerd[1506]: time="2025-01-29T11:55:14.016764078Z" level=info msg="Start snapshots syncer" Jan 29 11:55:14.017393 containerd[1506]: time="2025-01-29T11:55:14.016780657Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:55:14.017393 containerd[1506]: time="2025-01-29T11:55:14.016794382Z" level=info msg="Start streaming server" Jan 29 11:55:14.018224 containerd[1506]: time="2025-01-29T11:55:14.018172719Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:55:14.018457 containerd[1506]: time="2025-01-29T11:55:14.018420841Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:55:14.018739 containerd[1506]: time="2025-01-29T11:55:14.018714331Z" level=info msg="containerd successfully booted in 0.144798s" Jan 29 11:55:14.019641 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:55:14.259409 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:55:14.296071 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:55:14.308760 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:55:14.318723 systemd[1]: Started sshd@0-10.230.10.162:22-139.178.68.195:53990.service - OpenSSH per-connection server daemon (139.178.68.195:53990). Jan 29 11:55:14.339532 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:55:14.341369 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:55:14.353794 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 29 11:55:14.398646 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:55:14.415118 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:55:14.424843 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:55:14.427607 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:55:14.597938 tar[1493]: linux-amd64/LICENSE Jan 29 11:55:14.597938 tar[1493]: linux-amd64/README.md Jan 29 11:55:14.613660 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:55:14.856483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:55:14.878260 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:55:15.208827 systemd-networkd[1430]: eth0: Ignoring DHCPv6 address 2a02:1348:179:82a8:24:19ff:fee6:aa2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:82a8:24:19ff:fee6:aa2/64 assigned by NDisc. Jan 29 11:55:15.209438 systemd-networkd[1430]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 29 11:55:15.265432 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 53990 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:15.268063 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:15.290188 systemd-logind[1487]: New session 1 of user core. Jan 29 11:55:15.292986 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:55:15.306346 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:55:15.343076 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:55:15.352793 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:55:15.366624 (systemd)[1615]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:55:15.514424 kubelet[1606]: E0129 11:55:15.512877 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:55:15.516466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:55:15.516757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:55:15.530732 systemd[1615]: Queued start job for default target default.target. Jan 29 11:55:15.539335 systemd[1615]: Created slice app.slice - User Application Slice. Jan 29 11:55:15.539403 systemd[1615]: Reached target paths.target - Paths. Jan 29 11:55:15.539443 systemd[1615]: Reached target timers.target - Timers. Jan 29 11:55:15.541844 systemd[1615]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:55:15.564280 systemd[1615]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:55:15.564506 systemd[1615]: Reached target sockets.target - Sockets. Jan 29 11:55:15.564533 systemd[1615]: Reached target basic.target - Basic System. Jan 29 11:55:15.564607 systemd[1615]: Reached target default.target - Main User Target. Jan 29 11:55:15.564674 systemd[1615]: Startup finished in 186ms. Jan 29 11:55:15.564788 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:55:15.574576 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:55:16.218487 systemd[1]: Started sshd@1-10.230.10.162:22-139.178.68.195:49150.service - OpenSSH per-connection server daemon (139.178.68.195:49150). 
Jan 29 11:55:17.113516 sshd[1629]: Accepted publickey for core from 139.178.68.195 port 49150 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:17.115733 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:17.123114 systemd-logind[1487]: New session 2 of user core. Jan 29 11:55:17.136560 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:55:17.734432 sshd[1632]: Connection closed by 139.178.68.195 port 49150 Jan 29 11:55:17.735455 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:17.740081 systemd[1]: sshd@1-10.230.10.162:22-139.178.68.195:49150.service: Deactivated successfully. Jan 29 11:55:17.742517 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:55:17.743688 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:55:17.745224 systemd-logind[1487]: Removed session 2. Jan 29 11:55:17.904228 systemd[1]: Started sshd@2-10.230.10.162:22-139.178.68.195:49154.service - OpenSSH per-connection server daemon (139.178.68.195:49154). Jan 29 11:55:18.790632 sshd[1637]: Accepted publickey for core from 139.178.68.195 port 49154 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:18.792707 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:18.799406 systemd-logind[1487]: New session 3 of user core. Jan 29 11:55:18.807539 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:55:19.410332 sshd[1639]: Connection closed by 139.178.68.195 port 49154 Jan 29 11:55:19.411195 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:19.415131 systemd[1]: sshd@2-10.230.10.162:22-139.178.68.195:49154.service: Deactivated successfully. Jan 29 11:55:19.417873 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:55:19.420050 systemd-logind[1487]: Session 3 logged out. 
Waiting for processes to exit. Jan 29 11:55:19.421528 systemd-logind[1487]: Removed session 3. Jan 29 11:55:19.471018 agetty[1595]: failed to open credentials directory Jan 29 11:55:19.471053 agetty[1596]: failed to open credentials directory Jan 29 11:55:19.488931 login[1595]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:55:19.491342 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:55:19.497162 systemd-logind[1487]: New session 4 of user core. Jan 29 11:55:19.509577 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:55:19.514071 systemd-logind[1487]: New session 5 of user core. Jan 29 11:55:19.521569 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:55:20.235065 coreos-metadata[1477]: Jan 29 11:55:20.234 WARN failed to locate config-drive, using the metadata service API instead Jan 29 11:55:20.261998 coreos-metadata[1477]: Jan 29 11:55:20.261 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 11:55:20.268503 coreos-metadata[1477]: Jan 29 11:55:20.268 INFO Fetch failed with 404: resource not found Jan 29 11:55:20.268503 coreos-metadata[1477]: Jan 29 11:55:20.268 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 11:55:20.269228 coreos-metadata[1477]: Jan 29 11:55:20.269 INFO Fetch successful Jan 29 11:55:20.269369 coreos-metadata[1477]: Jan 29 11:55:20.269 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 11:55:20.279585 coreos-metadata[1477]: Jan 29 11:55:20.279 INFO Fetch successful Jan 29 11:55:20.279819 coreos-metadata[1477]: Jan 29 11:55:20.279 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 11:55:20.292347 coreos-metadata[1477]: Jan 29 11:55:20.292 INFO Fetch successful Jan 29 11:55:20.292347 coreos-metadata[1477]: Jan 29 11:55:20.292 INFO Fetching 
http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 11:55:20.305197 coreos-metadata[1477]: Jan 29 11:55:20.305 INFO Fetch successful Jan 29 11:55:20.305197 coreos-metadata[1477]: Jan 29 11:55:20.305 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 11:55:20.320985 coreos-metadata[1477]: Jan 29 11:55:20.320 INFO Fetch successful Jan 29 11:55:20.345404 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:55:20.347038 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:55:20.734072 coreos-metadata[1544]: Jan 29 11:55:20.733 WARN failed to locate config-drive, using the metadata service API instead Jan 29 11:55:20.755987 coreos-metadata[1544]: Jan 29 11:55:20.755 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 11:55:20.783543 coreos-metadata[1544]: Jan 29 11:55:20.783 INFO Fetch successful Jan 29 11:55:20.783543 coreos-metadata[1544]: Jan 29 11:55:20.783 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 11:55:20.810122 coreos-metadata[1544]: Jan 29 11:55:20.810 INFO Fetch successful Jan 29 11:55:20.812365 unknown[1544]: wrote ssh authorized keys file for user: core Jan 29 11:55:20.840537 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:55:20.841352 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:55:20.843753 systemd[1]: Finished sshkeys.service. Jan 29 11:55:20.846794 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:55:20.847348 systemd[1]: Startup finished in 1.430s (kernel) + 14.601s (initrd) + 11.879s (userspace) = 27.911s. Jan 29 11:55:25.767338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 29 11:55:25.778565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:55:25.956739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:55:25.968738 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:55:26.023186 kubelet[1689]: E0129 11:55:26.022951 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:55:26.027032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:55:26.027332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:55:29.576707 systemd[1]: Started sshd@3-10.230.10.162:22-139.178.68.195:46810.service - OpenSSH per-connection server daemon (139.178.68.195:46810). Jan 29 11:55:30.476673 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 46810 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:30.479014 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:30.488132 systemd-logind[1487]: New session 6 of user core. Jan 29 11:55:30.494519 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:55:31.097314 sshd[1701]: Connection closed by 139.178.68.195 port 46810 Jan 29 11:55:31.098368 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:31.102699 systemd[1]: sshd@3-10.230.10.162:22-139.178.68.195:46810.service: Deactivated successfully. Jan 29 11:55:31.104873 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:55:31.106620 systemd-logind[1487]: Session 6 logged out. 
Waiting for processes to exit. Jan 29 11:55:31.108137 systemd-logind[1487]: Removed session 6. Jan 29 11:55:31.256650 systemd[1]: Started sshd@4-10.230.10.162:22-139.178.68.195:46826.service - OpenSSH per-connection server daemon (139.178.68.195:46826). Jan 29 11:55:32.166175 sshd[1706]: Accepted publickey for core from 139.178.68.195 port 46826 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:32.168171 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:32.175213 systemd-logind[1487]: New session 7 of user core. Jan 29 11:55:32.183568 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:55:32.784800 sshd[1708]: Connection closed by 139.178.68.195 port 46826 Jan 29 11:55:32.785747 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:32.790715 systemd[1]: sshd@4-10.230.10.162:22-139.178.68.195:46826.service: Deactivated successfully. Jan 29 11:55:32.792984 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:55:32.793959 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:55:32.795315 systemd-logind[1487]: Removed session 7. Jan 29 11:55:32.950486 systemd[1]: Started sshd@5-10.230.10.162:22-139.178.68.195:46830.service - OpenSSH per-connection server daemon (139.178.68.195:46830). Jan 29 11:55:33.858198 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 46830 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:33.860149 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:33.869106 systemd-logind[1487]: New session 8 of user core. Jan 29 11:55:33.872497 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 11:55:34.475799 sshd[1715]: Connection closed by 139.178.68.195 port 46830 Jan 29 11:55:34.477612 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:34.482597 systemd[1]: sshd@5-10.230.10.162:22-139.178.68.195:46830.service: Deactivated successfully. Jan 29 11:55:34.485043 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:55:34.486166 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:55:34.487865 systemd-logind[1487]: Removed session 8. Jan 29 11:55:34.634669 systemd[1]: Started sshd@6-10.230.10.162:22-139.178.68.195:46836.service - OpenSSH per-connection server daemon (139.178.68.195:46836). Jan 29 11:55:35.522514 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 46836 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:35.524423 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:35.530796 systemd-logind[1487]: New session 9 of user core. Jan 29 11:55:35.539467 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:55:36.012992 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:55:36.013536 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:55:36.028744 sudo[1723]: pam_unix(sudo:session): session closed for user root Jan 29 11:55:36.171570 sshd[1722]: Connection closed by 139.178.68.195 port 46836 Jan 29 11:55:36.172347 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:36.178854 systemd[1]: sshd@6-10.230.10.162:22-139.178.68.195:46836.service: Deactivated successfully. Jan 29 11:55:36.181145 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:55:36.182593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:55:36.184360 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. 
Jan 29 11:55:36.189534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:55:36.190987 systemd-logind[1487]: Removed session 9. Jan 29 11:55:36.333343 systemd[1]: Started sshd@7-10.230.10.162:22-139.178.68.195:35648.service - OpenSSH per-connection server daemon (139.178.68.195:35648). Jan 29 11:55:36.368726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:55:36.377749 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:55:36.438682 kubelet[1738]: E0129 11:55:36.438599 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:55:36.440836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:55:36.441069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:55:37.219753 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 35648 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:37.221958 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:37.230332 systemd-logind[1487]: New session 10 of user core. Jan 29 11:55:37.240531 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 11:55:37.696117 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:55:37.696936 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:55:37.701984 sudo[1748]: pam_unix(sudo:session): session closed for user root Jan 29 11:55:37.710675 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:55:37.711129 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:55:37.741751 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:55:37.781685 augenrules[1770]: No rules Jan 29 11:55:37.782635 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:55:37.782945 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:55:37.784219 sudo[1747]: pam_unix(sudo:session): session closed for user root Jan 29 11:55:37.927024 sshd[1746]: Connection closed by 139.178.68.195 port 35648 Jan 29 11:55:37.927963 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:37.932405 systemd[1]: sshd@7-10.230.10.162:22-139.178.68.195:35648.service: Deactivated successfully. Jan 29 11:55:37.934677 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:55:37.936681 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:55:37.938265 systemd-logind[1487]: Removed session 10. Jan 29 11:55:38.088689 systemd[1]: Started sshd@8-10.230.10.162:22-139.178.68.195:35664.service - OpenSSH per-connection server daemon (139.178.68.195:35664). 
Jan 29 11:55:38.986084 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 35664 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI Jan 29 11:55:38.988165 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:38.997397 systemd-logind[1487]: New session 11 of user core. Jan 29 11:55:39.002468 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:55:39.465413 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:55:39.465909 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:55:39.941963 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:55:39.942733 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:55:40.386354 dockerd[1800]: time="2025-01-29T11:55:40.385535017Z" level=info msg="Starting up" Jan 29 11:55:40.535642 dockerd[1800]: time="2025-01-29T11:55:40.535554103Z" level=info msg="Loading containers: start." Jan 29 11:55:40.767532 kernel: Initializing XFRM netlink socket Jan 29 11:55:40.894591 systemd-networkd[1430]: docker0: Link UP Jan 29 11:55:40.924293 dockerd[1800]: time="2025-01-29T11:55:40.924176059Z" level=info msg="Loading containers: done." 
Jan 29 11:55:40.947541 dockerd[1800]: time="2025-01-29T11:55:40.946589108Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:55:40.947541 dockerd[1800]: time="2025-01-29T11:55:40.946756988Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:55:40.947541 dockerd[1800]: time="2025-01-29T11:55:40.946936502Z" level=info msg="Daemon has completed initialization" Jan 29 11:55:40.987782 dockerd[1800]: time="2025-01-29T11:55:40.987635469Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:55:40.988062 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:55:42.242487 containerd[1506]: time="2025-01-29T11:55:42.241627150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:55:43.056232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880576076.mount: Deactivated successfully. 
Jan 29 11:55:44.752280 containerd[1506]: time="2025-01-29T11:55:44.750903858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:44.753126 containerd[1506]: time="2025-01-29T11:55:44.753064034Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729" Jan 29 11:55:44.753536 containerd[1506]: time="2025-01-29T11:55:44.753475067Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:44.757682 containerd[1506]: time="2025-01-29T11:55:44.757646898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:44.759389 containerd[1506]: time="2025-01-29T11:55:44.759351453Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.51757211s" Jan 29 11:55:44.759479 containerd[1506]: time="2025-01-29T11:55:44.759410684Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:55:44.761679 containerd[1506]: time="2025-01-29T11:55:44.761649168Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:55:45.234936 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 11:55:46.456925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 29 11:55:46.470287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:55:46.926341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:55:46.935790 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:55:46.986274 containerd[1506]: time="2025-01-29T11:55:46.984051337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:46.987003 containerd[1506]: time="2025-01-29T11:55:46.986203230Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151" Jan 29 11:55:46.988859 containerd[1506]: time="2025-01-29T11:55:46.988823967Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:46.994425 containerd[1506]: time="2025-01-29T11:55:46.994376791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:47.001514 containerd[1506]: time="2025-01-29T11:55:47.001457965Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.239335258s" Jan 29 11:55:47.001815 containerd[1506]: time="2025-01-29T11:55:47.001687904Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image 
reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:55:47.003968 containerd[1506]: time="2025-01-29T11:55:47.003904810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:55:47.018518 kubelet[2063]: E0129 11:55:47.018456 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:55:47.022321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:55:47.022628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:55:48.826682 containerd[1506]: time="2025-01-29T11:55:48.825106567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:48.827425 containerd[1506]: time="2025-01-29T11:55:48.827201083Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061" Jan 29 11:55:48.828101 containerd[1506]: time="2025-01-29T11:55:48.828066945Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:48.832092 containerd[1506]: time="2025-01-29T11:55:48.832058483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:48.833938 containerd[1506]: time="2025-01-29T11:55:48.833897643Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id 
\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.829203717s" Jan 29 11:55:48.834109 containerd[1506]: time="2025-01-29T11:55:48.833942088Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:55:48.834714 containerd[1506]: time="2025-01-29T11:55:48.834631824Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:55:50.395370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261075021.mount: Deactivated successfully. Jan 29 11:55:51.125163 containerd[1506]: time="2025-01-29T11:55:51.125016741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:51.130010 containerd[1506]: time="2025-01-29T11:55:51.129965091Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:51.130373 containerd[1506]: time="2025-01-29T11:55:51.130314243Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 29 11:55:51.133888 containerd[1506]: time="2025-01-29T11:55:51.133846698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:51.135635 containerd[1506]: time="2025-01-29T11:55:51.135071124Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.300294355s" Jan 29 11:55:51.135635 containerd[1506]: time="2025-01-29T11:55:51.135148065Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:55:51.137070 containerd[1506]: time="2025-01-29T11:55:51.137007356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:55:51.747678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300791691.mount: Deactivated successfully. Jan 29 11:55:52.978318 containerd[1506]: time="2025-01-29T11:55:52.978147096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:52.983016 containerd[1506]: time="2025-01-29T11:55:52.982938631Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 11:55:52.988086 containerd[1506]: time="2025-01-29T11:55:52.988026322Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:53.004736 containerd[1506]: time="2025-01-29T11:55:53.004644202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:53.006316 containerd[1506]: time="2025-01-29T11:55:53.006126965Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.868858424s" Jan 29 11:55:53.006316 containerd[1506]: time="2025-01-29T11:55:53.006175451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:55:53.007945 containerd[1506]: time="2025-01-29T11:55:53.007664366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:55:53.650361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973783379.mount: Deactivated successfully. Jan 29 11:55:53.666736 containerd[1506]: time="2025-01-29T11:55:53.665502228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:53.670492 containerd[1506]: time="2025-01-29T11:55:53.670394884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 29 11:55:53.672282 containerd[1506]: time="2025-01-29T11:55:53.672230795Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:53.675205 containerd[1506]: time="2025-01-29T11:55:53.675167708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:53.676843 containerd[1506]: time="2025-01-29T11:55:53.676801890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 669.086705ms" Jan 29 
11:55:53.677022 containerd[1506]: time="2025-01-29T11:55:53.676990143Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:55:53.678311 containerd[1506]: time="2025-01-29T11:55:53.678240713Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:55:54.388731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709737849.mount: Deactivated successfully. Jan 29 11:55:57.167075 containerd[1506]: time="2025-01-29T11:55:57.166919580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:57.169732 containerd[1506]: time="2025-01-29T11:55:57.169602380Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 29 11:55:57.170747 containerd[1506]: time="2025-01-29T11:55:57.170356627Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:57.176331 containerd[1506]: time="2025-01-29T11:55:57.176210068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:57.178290 containerd[1506]: time="2025-01-29T11:55:57.177967732Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.499644756s" Jan 29 11:55:57.178290 containerd[1506]: time="2025-01-29T11:55:57.178016978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:55:57.208490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:55:57.220079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:55:57.494515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:55:57.504713 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:55:57.577177 kubelet[2200]: E0129 11:55:57.577068 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:55:57.581516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:55:57.582170 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:55:58.949888 update_engine[1488]: I20250129 11:55:58.949491 1488 update_attempter.cc:509] Updating boot flags... Jan 29 11:55:59.063890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2225) Jan 29 11:55:59.121303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2223) Jan 29 11:56:03.315599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:03.323608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:03.374339 systemd[1]: Reloading requested from client PID 2239 ('systemctl') (unit session-11.scope)... Jan 29 11:56:03.374580 systemd[1]: Reloading... Jan 29 11:56:03.536322 zram_generator::config[2274]: No configuration found. 
Jan 29 11:56:03.722037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:03.836422 systemd[1]: Reloading finished in 460 ms. Jan 29 11:56:03.915831 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:03.921584 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:56:03.921933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:03.926602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:04.150898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:04.159912 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:56:04.225220 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:56:04.225220 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:56:04.225220 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:56:04.225220 kubelet[2347]: I0129 11:56:04.224955 2347 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:56:04.766019 kubelet[2347]: I0129 11:56:04.764453 2347 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:56:04.766019 kubelet[2347]: I0129 11:56:04.764508 2347 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:56:04.766019 kubelet[2347]: I0129 11:56:04.764880 2347 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:56:04.811242 kubelet[2347]: I0129 11:56:04.811187 2347 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:56:04.813468 kubelet[2347]: E0129 11:56:04.813394 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.10.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:56:04.828376 kubelet[2347]: E0129 11:56:04.828321 2347 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:56:04.828376 kubelet[2347]: I0129 11:56:04.828368 2347 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:56:04.837522 kubelet[2347]: I0129 11:56:04.837483 2347 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Jan 29 11:56:04.839009 kubelet[2347]: I0129 11:56:04.838966 2347 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 11:56:04.839345 kubelet[2347]: I0129 11:56:04.839288 2347 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:56:04.839587 kubelet[2347]: I0129 11:56:04.839341 2347 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xy63l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 11:56:04.839863 kubelet[2347]: I0129 11:56:04.839605 2347 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:56:04.839863 kubelet[2347]: I0129 11:56:04.839622 2347 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 11:56:04.839863 kubelet[2347]: I0129 11:56:04.839817 2347 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:56:04.842549 kubelet[2347]: I0129 11:56:04.841979 2347 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 11:56:04.842549 kubelet[2347]: I0129 11:56:04.842014 2347 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:56:04.842549 kubelet[2347]: I0129 11:56:04.842079 2347 kubelet.go:314] "Adding apiserver pod source"
Jan 29 11:56:04.842549 kubelet[2347]: I0129 11:56:04.842120 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:56:04.846129 kubelet[2347]: W0129 11:56:04.846070 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.10.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xy63l.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:04.846323 kubelet[2347]: E0129 11:56:04.846291 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.10.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xy63l.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:04.848666 kubelet[2347]: W0129 11:56:04.848039 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.10.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:04.848666 kubelet[2347]: E0129 11:56:04.848113 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.10.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:04.849050 kubelet[2347]: I0129 11:56:04.849008 2347 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:56:04.851148 kubelet[2347]: I0129 11:56:04.851108 2347 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:56:04.851396 kubelet[2347]: W0129 11:56:04.851369 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 11:56:04.853669 kubelet[2347]: I0129 11:56:04.853646 2347 server.go:1269] "Started kubelet"
Jan 29 11:56:04.855170 kubelet[2347]: I0129 11:56:04.855095 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:56:04.856939 kubelet[2347]: I0129 11:56:04.856767 2347 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 11:56:04.860312 kubelet[2347]: I0129 11:56:04.860276 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:56:04.861775 kubelet[2347]: I0129 11:56:04.861335 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:56:04.861775 kubelet[2347]: I0129 11:56:04.861671 2347 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:56:04.872277 kubelet[2347]: E0129 11:56:04.862820 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.10.162:6443/api/v1/namespaces/default/events\": dial tcp 10.230.10.162:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-xy63l.gb1.brightbox.com.181f27d22dd16656 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-xy63l.gb1.brightbox.com,UID:srv-xy63l.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-xy63l.gb1.brightbox.com,},FirstTimestamp:2025-01-29 11:56:04.853614166 +0000 UTC m=+0.686952869,LastTimestamp:2025-01-29 11:56:04.853614166 +0000 UTC m=+0.686952869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-xy63l.gb1.brightbox.com,}"
Jan 29 11:56:04.873292 kubelet[2347]: I0129 11:56:04.871557 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 11:56:04.875941 kubelet[2347]: I0129 11:56:04.874766 2347 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 11:56:04.875941 kubelet[2347]: E0129 11:56:04.875101 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-xy63l.gb1.brightbox.com\" not found"
Jan 29 11:56:04.878243 kubelet[2347]: E0129 11:56:04.878196 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xy63l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.162:6443: connect: connection refused" interval="200ms"
Jan 29 11:56:04.878685 kubelet[2347]: I0129 11:56:04.878663 2347 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 11:56:04.879197 kubelet[2347]: W0129 11:56:04.879151 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.10.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:04.879384 kubelet[2347]: E0129 11:56:04.879356 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.10.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:04.879614 kubelet[2347]: I0129 11:56:04.879592 2347 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:56:04.884867 kubelet[2347]: I0129 11:56:04.884843 2347 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:56:04.885363 kubelet[2347]: I0129 11:56:04.885077 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:56:04.888186 kubelet[2347]: E0129 11:56:04.888009 2347 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:56:04.890299 kubelet[2347]: I0129 11:56:04.889189 2347 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:56:04.906109 kubelet[2347]: I0129 11:56:04.906048 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:56:04.907937 kubelet[2347]: I0129 11:56:04.907908 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:56:04.908112 kubelet[2347]: I0129 11:56:04.908091 2347 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:56:04.908233 kubelet[2347]: I0129 11:56:04.908214 2347 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 11:56:04.908426 kubelet[2347]: E0129 11:56:04.908397 2347 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:56:04.920279 kubelet[2347]: W0129 11:56:04.920199 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.10.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:04.920925 kubelet[2347]: E0129 11:56:04.920880 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.10.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:04.938164 kubelet[2347]: I0129 11:56:04.938137 2347 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:56:04.938403 kubelet[2347]: I0129 11:56:04.938382 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:56:04.938620 kubelet[2347]: I0129 11:56:04.938601 2347 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:56:04.940963 kubelet[2347]: I0129 11:56:04.940936 2347 policy_none.go:49] "None policy: Start"
Jan 29 11:56:04.942112 kubelet[2347]: I0129 11:56:04.942088 2347 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:56:04.942414 kubelet[2347]: I0129 11:56:04.942392 2347 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:56:04.954142 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 11:56:04.970303 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 11:56:04.975979 kubelet[2347]: E0129 11:56:04.975170 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-xy63l.gb1.brightbox.com\" not found"
Jan 29 11:56:04.976110 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 11:56:04.983933 kubelet[2347]: I0129 11:56:04.983885 2347 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:56:04.984228 kubelet[2347]: I0129 11:56:04.984193 2347 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:56:04.984328 kubelet[2347]: I0129 11:56:04.984235 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:56:04.985334 kubelet[2347]: I0129 11:56:04.985220 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:56:04.989264 kubelet[2347]: E0129 11:56:04.989207 2347 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-xy63l.gb1.brightbox.com\" not found"
Jan 29 11:56:05.026839 systemd[1]: Created slice kubepods-burstable-pod0e26de70f5779763e58f91b7f47c90ea.slice - libcontainer container kubepods-burstable-pod0e26de70f5779763e58f91b7f47c90ea.slice.
Jan 29 11:56:05.044474 systemd[1]: Created slice kubepods-burstable-podadd6d379a60f4dee6a7f204e8cdc0b23.slice - libcontainer container kubepods-burstable-podadd6d379a60f4dee6a7f204e8cdc0b23.slice.
Jan 29 11:56:05.052731 systemd[1]: Created slice kubepods-burstable-poda075cb861b1223cfcf69088c41070679.slice - libcontainer container kubepods-burstable-poda075cb861b1223cfcf69088c41070679.slice.
Jan 29 11:56:05.079726 kubelet[2347]: E0129 11:56:05.079659 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xy63l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.162:6443: connect: connection refused" interval="400ms"
Jan 29 11:56:05.080951 kubelet[2347]: I0129 11:56:05.080905 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e26de70f5779763e58f91b7f47c90ea-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" (UID: \"0e26de70f5779763e58f91b7f47c90ea\") " pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081039 kubelet[2347]: I0129 11:56:05.080959 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-k8s-certs\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081039 kubelet[2347]: I0129 11:56:05.080992 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-kubeconfig\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081039 kubelet[2347]: I0129 11:56:05.081019 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a075cb861b1223cfcf69088c41070679-kubeconfig\") pod \"kube-scheduler-srv-xy63l.gb1.brightbox.com\" (UID: \"a075cb861b1223cfcf69088c41070679\") " pod="kube-system/kube-scheduler-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081206 kubelet[2347]: I0129 11:56:05.081046 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e26de70f5779763e58f91b7f47c90ea-ca-certs\") pod \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" (UID: \"0e26de70f5779763e58f91b7f47c90ea\") " pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081206 kubelet[2347]: I0129 11:56:05.081072 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e26de70f5779763e58f91b7f47c90ea-k8s-certs\") pod \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" (UID: \"0e26de70f5779763e58f91b7f47c90ea\") " pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081206 kubelet[2347]: I0129 11:56:05.081097 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-ca-certs\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081206 kubelet[2347]: I0129 11:56:05.081121 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-flexvolume-dir\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.081206 kubelet[2347]: I0129 11:56:05.081155 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.088000 kubelet[2347]: I0129 11:56:05.087963 2347 kubelet_node_status.go:72] "Attempting to register node" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.088520 kubelet[2347]: E0129 11:56:05.088478 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.10.162:6443/api/v1/nodes\": dial tcp 10.230.10.162:6443: connect: connection refused" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.292898 kubelet[2347]: I0129 11:56:05.292609 2347 kubelet_node_status.go:72] "Attempting to register node" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.293725 kubelet[2347]: E0129 11:56:05.293685 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.10.162:6443/api/v1/nodes\": dial tcp 10.230.10.162:6443: connect: connection refused" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.344478 containerd[1506]: time="2025-01-29T11:56:05.344408497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xy63l.gb1.brightbox.com,Uid:0e26de70f5779763e58f91b7f47c90ea,Namespace:kube-system,Attempt:0,}"
Jan 29 11:56:05.350499 containerd[1506]: time="2025-01-29T11:56:05.350464374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-xy63l.gb1.brightbox.com,Uid:add6d379a60f4dee6a7f204e8cdc0b23,Namespace:kube-system,Attempt:0,}"
Jan 29 11:56:05.356735 containerd[1506]: time="2025-01-29T11:56:05.356453923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xy63l.gb1.brightbox.com,Uid:a075cb861b1223cfcf69088c41070679,Namespace:kube-system,Attempt:0,}"
Jan 29 11:56:05.481312 kubelet[2347]: E0129 11:56:05.481165 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xy63l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.162:6443: connect: connection refused" interval="800ms"
Jan 29 11:56:05.691599 kubelet[2347]: W0129 11:56:05.691415 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.10.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:05.691599 kubelet[2347]: E0129 11:56:05.691497 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.10.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:05.697200 kubelet[2347]: I0129 11:56:05.696347 2347 kubelet_node_status.go:72] "Attempting to register node" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.697200 kubelet[2347]: E0129 11:56:05.696720 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.10.162:6443/api/v1/nodes\": dial tcp 10.230.10.162:6443: connect: connection refused" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:05.876880 kubelet[2347]: W0129 11:56:05.876763 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.10.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:05.877206 kubelet[2347]: E0129 11:56:05.877174 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.10.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:05.947035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157589368.mount: Deactivated successfully.
Jan 29 11:56:05.955016 containerd[1506]: time="2025-01-29T11:56:05.953739421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:56:05.956662 containerd[1506]: time="2025-01-29T11:56:05.955576238Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:56:05.957214 containerd[1506]: time="2025-01-29T11:56:05.957162726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 29 11:56:05.957887 containerd[1506]: time="2025-01-29T11:56:05.957850041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:56:05.959178 containerd[1506]: time="2025-01-29T11:56:05.959142267Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:56:05.961022 containerd[1506]: time="2025-01-29T11:56:05.960973823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:56:05.963817 containerd[1506]: time="2025-01-29T11:56:05.963749655Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:56:05.967405 containerd[1506]: time="2025-01-29T11:56:05.967363633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 622.204693ms"
Jan 29 11:56:05.968742 containerd[1506]: time="2025-01-29T11:56:05.968707363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:56:05.972169 containerd[1506]: time="2025-01-29T11:56:05.972133394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.571572ms"
Jan 29 11:56:05.975206 containerd[1506]: time="2025-01-29T11:56:05.975159156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.628747ms"
Jan 29 11:56:06.171437 containerd[1506]: time="2025-01-29T11:56:06.167132544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:56:06.171437 containerd[1506]: time="2025-01-29T11:56:06.170380042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:56:06.171437 containerd[1506]: time="2025-01-29T11:56:06.170406735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:06.171437 containerd[1506]: time="2025-01-29T11:56:06.170533872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:06.175397 containerd[1506]: time="2025-01-29T11:56:06.175100554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:56:06.175397 containerd[1506]: time="2025-01-29T11:56:06.175192748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:56:06.175397 containerd[1506]: time="2025-01-29T11:56:06.175221102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:06.175895 containerd[1506]: time="2025-01-29T11:56:06.175695168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:06.185311 containerd[1506]: time="2025-01-29T11:56:06.184898128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:56:06.185311 containerd[1506]: time="2025-01-29T11:56:06.184975343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:56:06.185311 containerd[1506]: time="2025-01-29T11:56:06.185001015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:06.185311 containerd[1506]: time="2025-01-29T11:56:06.185132744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:06.218711 systemd[1]: Started cri-containerd-743b7cfa78a3147d98b6cc744d69e50c1daa7a7d564be0f3cdc52f506b7e250f.scope - libcontainer container 743b7cfa78a3147d98b6cc744d69e50c1daa7a7d564be0f3cdc52f506b7e250f.
Jan 29 11:56:06.238531 systemd[1]: Started cri-containerd-625858f67b766ecbc9fe56e1991ea9333f7c0d0489c2c21d3855d5dcb20ecfb8.scope - libcontainer container 625858f67b766ecbc9fe56e1991ea9333f7c0d0489c2c21d3855d5dcb20ecfb8.
Jan 29 11:56:06.241884 systemd[1]: Started cri-containerd-7f5f4c06e4562befbdcdb1c1d9ac6831384d40a918a18c6db51eaa98b76e8cf2.scope - libcontainer container 7f5f4c06e4562befbdcdb1c1d9ac6831384d40a918a18c6db51eaa98b76e8cf2.
Jan 29 11:56:06.282855 kubelet[2347]: E0129 11:56:06.282627 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xy63l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.162:6443: connect: connection refused" interval="1.6s"
Jan 29 11:56:06.320670 containerd[1506]: time="2025-01-29T11:56:06.320056564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-xy63l.gb1.brightbox.com,Uid:add6d379a60f4dee6a7f204e8cdc0b23,Namespace:kube-system,Attempt:0,} returns sandbox id \"743b7cfa78a3147d98b6cc744d69e50c1daa7a7d564be0f3cdc52f506b7e250f\""
Jan 29 11:56:06.335432 containerd[1506]: time="2025-01-29T11:56:06.334232989Z" level=info msg="CreateContainer within sandbox \"743b7cfa78a3147d98b6cc744d69e50c1daa7a7d564be0f3cdc52f506b7e250f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 11:56:06.356899 kubelet[2347]: W0129 11:56:06.356375 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.10.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xy63l.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:06.356899 kubelet[2347]: E0129 11:56:06.356498 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.10.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xy63l.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:06.358520 containerd[1506]: time="2025-01-29T11:56:06.358454175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xy63l.gb1.brightbox.com,Uid:0e26de70f5779763e58f91b7f47c90ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"625858f67b766ecbc9fe56e1991ea9333f7c0d0489c2c21d3855d5dcb20ecfb8\""
Jan 29 11:56:06.364153 containerd[1506]: time="2025-01-29T11:56:06.363936093Z" level=info msg="CreateContainer within sandbox \"625858f67b766ecbc9fe56e1991ea9333f7c0d0489c2c21d3855d5dcb20ecfb8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 11:56:06.374032 containerd[1506]: time="2025-01-29T11:56:06.373943808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xy63l.gb1.brightbox.com,Uid:a075cb861b1223cfcf69088c41070679,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f5f4c06e4562befbdcdb1c1d9ac6831384d40a918a18c6db51eaa98b76e8cf2\""
Jan 29 11:56:06.377640 containerd[1506]: time="2025-01-29T11:56:06.377194051Z" level=info msg="CreateContainer within sandbox \"743b7cfa78a3147d98b6cc744d69e50c1daa7a7d564be0f3cdc52f506b7e250f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ebb057b7bc0f9b7d3c9a36c637780e79e0a7f95e45cd429e725fee347912ad0\""
Jan 29 11:56:06.379943 containerd[1506]: time="2025-01-29T11:56:06.379908601Z" level=info msg="StartContainer for \"6ebb057b7bc0f9b7d3c9a36c637780e79e0a7f95e45cd429e725fee347912ad0\""
Jan 29 11:56:06.381723 containerd[1506]: time="2025-01-29T11:56:06.381691487Z" level=info msg="CreateContainer within sandbox \"7f5f4c06e4562befbdcdb1c1d9ac6831384d40a918a18c6db51eaa98b76e8cf2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 11:56:06.387852 containerd[1506]: time="2025-01-29T11:56:06.387790927Z" level=info msg="CreateContainer within sandbox \"625858f67b766ecbc9fe56e1991ea9333f7c0d0489c2c21d3855d5dcb20ecfb8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7cceb7608878c8f9953a1bf375eab9eb0cd9417d5c101a242d3873b9826871a0\""
Jan 29 11:56:06.390069 containerd[1506]: time="2025-01-29T11:56:06.390035029Z" level=info msg="StartContainer for \"7cceb7608878c8f9953a1bf375eab9eb0cd9417d5c101a242d3873b9826871a0\""
Jan 29 11:56:06.413828 containerd[1506]: time="2025-01-29T11:56:06.413731358Z" level=info msg="CreateContainer within sandbox \"7f5f4c06e4562befbdcdb1c1d9ac6831384d40a918a18c6db51eaa98b76e8cf2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c9747bdd108895a77e24f0a54410efd50aa1af95ecf3b7055e1abe744d949ace\""
Jan 29 11:56:06.415364 containerd[1506]: time="2025-01-29T11:56:06.415277359Z" level=info msg="StartContainer for \"c9747bdd108895a77e24f0a54410efd50aa1af95ecf3b7055e1abe744d949ace\""
Jan 29 11:56:06.438464 systemd[1]: Started cri-containerd-6ebb057b7bc0f9b7d3c9a36c637780e79e0a7f95e45cd429e725fee347912ad0.scope - libcontainer container 6ebb057b7bc0f9b7d3c9a36c637780e79e0a7f95e45cd429e725fee347912ad0.
Jan 29 11:56:06.472543 systemd[1]: Started cri-containerd-7cceb7608878c8f9953a1bf375eab9eb0cd9417d5c101a242d3873b9826871a0.scope - libcontainer container 7cceb7608878c8f9953a1bf375eab9eb0cd9417d5c101a242d3873b9826871a0.
Jan 29 11:56:06.480445 systemd[1]: Started cri-containerd-c9747bdd108895a77e24f0a54410efd50aa1af95ecf3b7055e1abe744d949ace.scope - libcontainer container c9747bdd108895a77e24f0a54410efd50aa1af95ecf3b7055e1abe744d949ace.
Jan 29 11:56:06.501100 kubelet[2347]: I0129 11:56:06.500637 2347 kubelet_node_status.go:72] "Attempting to register node" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:06.501100 kubelet[2347]: E0129 11:56:06.501058 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.10.162:6443/api/v1/nodes\": dial tcp 10.230.10.162:6443: connect: connection refused" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:06.505620 kubelet[2347]: W0129 11:56:06.505494 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.10.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.10.162:6443: connect: connection refused
Jan 29 11:56:06.505620 kubelet[2347]: E0129 11:56:06.505572 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.10.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:06.577467 containerd[1506]: time="2025-01-29T11:56:06.577087439Z" level=info msg="StartContainer for \"6ebb057b7bc0f9b7d3c9a36c637780e79e0a7f95e45cd429e725fee347912ad0\" returns successfully"
Jan 29 11:56:06.583872 containerd[1506]: time="2025-01-29T11:56:06.583462630Z" level=info msg="StartContainer for \"7cceb7608878c8f9953a1bf375eab9eb0cd9417d5c101a242d3873b9826871a0\" returns successfully"
Jan 29 11:56:06.605462 containerd[1506]: time="2025-01-29T11:56:06.605402418Z" level=info msg="StartContainer for \"c9747bdd108895a77e24f0a54410efd50aa1af95ecf3b7055e1abe744d949ace\" returns successfully"
Jan 29 11:56:06.857177 kubelet[2347]: E0129 11:56:06.856088 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.10.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.10.162:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:56:08.106337 kubelet[2347]: I0129 11:56:08.105983 2347 kubelet_node_status.go:72] "Attempting to register node" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:09.941206 kubelet[2347]: I0129 11:56:09.941088 2347 kubelet_node_status.go:75] "Successfully registered node" node="srv-xy63l.gb1.brightbox.com"
Jan 29 11:56:09.941206 kubelet[2347]: E0129 11:56:09.941199 2347 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-xy63l.gb1.brightbox.com\": node \"srv-xy63l.gb1.brightbox.com\" not found"
Jan 29 11:56:10.032684 kubelet[2347]: E0129 11:56:10.032464 2347 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-xy63l.gb1.brightbox.com.181f27d22dd16656 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-xy63l.gb1.brightbox.com,UID:srv-xy63l.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-xy63l.gb1.brightbox.com,},FirstTimestamp:2025-01-29 11:56:04.853614166 +0000 UTC m=+0.686952869,LastTimestamp:2025-01-29 11:56:04.853614166 +0000 UTC m=+0.686952869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-xy63l.gb1.brightbox.com,}"
Jan 29 11:56:10.077576 kubelet[2347]: E0129 11:56:10.077498 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Jan 29 11:56:10.093347 kubelet[2347]: E0129 11:56:10.093161 2347 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-xy63l.gb1.brightbox.com.181f27d22fde0b17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-xy63l.gb1.brightbox.com,UID:srv-xy63l.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-xy63l.gb1.brightbox.com,},FirstTimestamp:2025-01-29 11:56:04.887997207 +0000 UTC m=+0.721335917,LastTimestamp:2025-01-29 11:56:04.887997207 +0000 UTC m=+0.721335917,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-xy63l.gb1.brightbox.com,}"
Jan 29 11:56:10.852215 kubelet[2347]: I0129 11:56:10.852072 2347 apiserver.go:52] "Watching apiserver"
Jan 29 11:56:10.879186 kubelet[2347]: I0129 11:56:10.879137 2347 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 11:56:10.891416 kubelet[2347]: W0129 11:56:10.891188 2347 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 11:56:11.746935 kubelet[2347]: W0129 11:56:11.745953 2347 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 11:56:12.158792 systemd[1]: Reloading requested from client PID 2627 ('systemctl') (unit session-11.scope)...
Jan 29 11:56:12.158838 systemd[1]: Reloading...
Jan 29 11:56:12.300361 zram_generator::config[2666]: No configuration found.
Jan 29 11:56:12.490838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:12.636574 systemd[1]: Reloading finished in 476 ms. Jan 29 11:56:12.713404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:12.732384 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:56:12.733713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:12.733833 systemd[1]: kubelet.service: Consumed 1.220s CPU time, 112.4M memory peak, 0B memory swap peak. Jan 29 11:56:12.743802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:12.984403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:13.001827 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:56:13.103994 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:56:13.103994 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:56:13.103994 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:56:13.104916 kubelet[2729]: I0129 11:56:13.104056 2729 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:56:13.118130 kubelet[2729]: I0129 11:56:13.117824 2729 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:56:13.118130 kubelet[2729]: I0129 11:56:13.117862 2729 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:56:13.118410 kubelet[2729]: I0129 11:56:13.118227 2729 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:56:13.120912 kubelet[2729]: I0129 11:56:13.120879 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:56:13.132324 kubelet[2729]: I0129 11:56:13.131799 2729 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:56:13.139085 kubelet[2729]: E0129 11:56:13.139037 2729 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:56:13.139393 kubelet[2729]: I0129 11:56:13.139336 2729 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:56:13.152770 kubelet[2729]: I0129 11:56:13.152738 2729 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:56:13.153079 kubelet[2729]: I0129 11:56:13.153056 2729 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:56:13.153523 kubelet[2729]: I0129 11:56:13.153439 2729 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:56:13.153896 kubelet[2729]: I0129 11:56:13.153650 2729 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xy63l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topo
logyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:56:13.155269 kubelet[2729]: I0129 11:56:13.154136 2729 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:56:13.155269 kubelet[2729]: I0129 11:56:13.154163 2729 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:56:13.155269 kubelet[2729]: I0129 11:56:13.154261 2729 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:56:13.155269 kubelet[2729]: I0129 11:56:13.154427 2729 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:56:13.155269 kubelet[2729]: I0129 11:56:13.154454 2729 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:56:13.155269 kubelet[2729]: I0129 11:56:13.154504 2729 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:56:13.155269 kubelet[2729]: I0129 11:56:13.154526 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:56:13.155751 sudo[2744]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:56:13.156358 sudo[2744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:56:13.159424 kubelet[2729]: I0129 11:56:13.159390 2729 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:56:13.160009 kubelet[2729]: I0129 11:56:13.159981 2729 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:56:13.161762 kubelet[2729]: I0129 11:56:13.161735 2729 server.go:1269] "Started kubelet" Jan 29 11:56:13.174305 kubelet[2729]: I0129 11:56:13.171224 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:56:13.193833 kubelet[2729]: I0129 11:56:13.193786 2729 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:56:13.199950 kubelet[2729]: I0129 11:56:13.199557 2729 server.go:460] "Adding debug handlers to 
kubelet server" Jan 29 11:56:13.201098 kubelet[2729]: I0129 11:56:13.200942 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:56:13.201723 kubelet[2729]: I0129 11:56:13.201275 2729 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:56:13.201723 kubelet[2729]: I0129 11:56:13.201592 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:56:13.211296 kubelet[2729]: I0129 11:56:13.210581 2729 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:56:13.213343 kubelet[2729]: I0129 11:56:13.213315 2729 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:56:13.213620 kubelet[2729]: I0129 11:56:13.213596 2729 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:56:13.219512 kubelet[2729]: E0129 11:56:13.219354 2729 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:56:13.219874 kubelet[2729]: I0129 11:56:13.219806 2729 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:56:13.219985 kubelet[2729]: I0129 11:56:13.219944 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:56:13.226762 kubelet[2729]: I0129 11:56:13.226485 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:56:13.231298 kubelet[2729]: I0129 11:56:13.229646 2729 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:56:13.239793 kubelet[2729]: I0129 11:56:13.239765 2729 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:56:13.240114 kubelet[2729]: I0129 11:56:13.240094 2729 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:56:13.240477 kubelet[2729]: I0129 11:56:13.240399 2729 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:56:13.241327 kubelet[2729]: E0129 11:56:13.240922 2729 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:56:13.342804 kubelet[2729]: E0129 11:56:13.342548 2729 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:56:13.351715 kubelet[2729]: I0129 11:56:13.351686 2729 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:56:13.351715 kubelet[2729]: I0129 11:56:13.351710 2729 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:56:13.351861 kubelet[2729]: I0129 11:56:13.351744 2729 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:56:13.352546 kubelet[2729]: I0129 11:56:13.352033 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:56:13.352546 kubelet[2729]: I0129 11:56:13.352084 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:56:13.352546 kubelet[2729]: I0129 11:56:13.352116 2729 policy_none.go:49] "None policy: Start" Jan 29 11:56:13.353601 kubelet[2729]: I0129 11:56:13.353335 2729 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:56:13.353601 kubelet[2729]: I0129 11:56:13.353376 2729 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:56:13.353601 kubelet[2729]: I0129 11:56:13.353591 2729 state_mem.go:75] "Updated machine memory state" Jan 29 11:56:13.363427 kubelet[2729]: I0129 11:56:13.362773 2729 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:56:13.364971 kubelet[2729]: I0129 11:56:13.364717 
2729 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:56:13.364971 kubelet[2729]: I0129 11:56:13.364751 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:56:13.365486 kubelet[2729]: I0129 11:56:13.365129 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:56:13.500953 kubelet[2729]: I0129 11:56:13.500912 2729 kubelet_node_status.go:72] "Attempting to register node" node="srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.522049 kubelet[2729]: I0129 11:56:13.521426 2729 kubelet_node_status.go:111] "Node was previously registered" node="srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.522049 kubelet[2729]: I0129 11:56:13.521592 2729 kubelet_node_status.go:75] "Successfully registered node" node="srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.561006 kubelet[2729]: W0129 11:56:13.560415 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:56:13.561006 kubelet[2729]: E0129 11:56:13.560512 2729 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.563281 kubelet[2729]: W0129 11:56:13.562804 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:56:13.563281 kubelet[2729]: E0129 11:56:13.562858 2729 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.563834 kubelet[2729]: W0129 11:56:13.563687 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Jan 29 11:56:13.715660 kubelet[2729]: I0129 11:56:13.715604 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e26de70f5779763e58f91b7f47c90ea-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" (UID: \"0e26de70f5779763e58f91b7f47c90ea\") " pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.715829 kubelet[2729]: I0129 11:56:13.715687 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-ca-certs\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.715898 kubelet[2729]: I0129 11:56:13.715769 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-k8s-certs\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.715898 kubelet[2729]: I0129 11:56:13.715878 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.716034 kubelet[2729]: I0129 11:56:13.715911 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/0e26de70f5779763e58f91b7f47c90ea-ca-certs\") pod \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" (UID: \"0e26de70f5779763e58f91b7f47c90ea\") " pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.716034 kubelet[2729]: I0129 11:56:13.715936 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e26de70f5779763e58f91b7f47c90ea-k8s-certs\") pod \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" (UID: \"0e26de70f5779763e58f91b7f47c90ea\") " pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.716034 kubelet[2729]: I0129 11:56:13.715963 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-kubeconfig\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.716034 kubelet[2729]: I0129 11:56:13.716002 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a075cb861b1223cfcf69088c41070679-kubeconfig\") pod \"kube-scheduler-srv-xy63l.gb1.brightbox.com\" (UID: \"a075cb861b1223cfcf69088c41070679\") " pod="kube-system/kube-scheduler-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.716237 kubelet[2729]: I0129 11:56:13.716032 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/add6d379a60f4dee6a7f204e8cdc0b23-flexvolume-dir\") pod \"kube-controller-manager-srv-xy63l.gb1.brightbox.com\" (UID: \"add6d379a60f4dee6a7f204e8cdc0b23\") " pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:13.993149 sudo[2744]: 
pam_unix(sudo:session): session closed for user root Jan 29 11:56:14.166127 kubelet[2729]: I0129 11:56:14.166063 2729 apiserver.go:52] "Watching apiserver" Jan 29 11:56:14.213703 kubelet[2729]: I0129 11:56:14.213605 2729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:56:14.327504 kubelet[2729]: W0129 11:56:14.327090 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:56:14.327504 kubelet[2729]: E0129 11:56:14.327178 2729 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-xy63l.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com" Jan 29 11:56:14.365437 kubelet[2729]: I0129 11:56:14.364906 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-xy63l.gb1.brightbox.com" podStartSLOduration=4.364380795 podStartE2EDuration="4.364380795s" podCreationTimestamp="2025-01-29 11:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:14.350966382 +0000 UTC m=+1.331739303" watchObservedRunningTime="2025-01-29 11:56:14.364380795 +0000 UTC m=+1.345153728" Jan 29 11:56:14.366935 kubelet[2729]: I0129 11:56:14.366004 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-xy63l.gb1.brightbox.com" podStartSLOduration=3.365994262 podStartE2EDuration="3.365994262s" podCreationTimestamp="2025-01-29 11:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:14.365059979 +0000 UTC m=+1.345832914" watchObservedRunningTime="2025-01-29 11:56:14.365994262 +0000 UTC m=+1.346767213" Jan 29 11:56:16.796980 sudo[1781]: 
pam_unix(sudo:session): session closed for user root Jan 29 11:56:16.941794 sshd[1780]: Connection closed by 139.178.68.195 port 35664 Jan 29 11:56:16.953379 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:16.958782 systemd[1]: sshd@8-10.230.10.162:22-139.178.68.195:35664.service: Deactivated successfully. Jan 29 11:56:16.962746 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:56:16.963132 systemd[1]: session-11.scope: Consumed 9.285s CPU time, 136.2M memory peak, 0B memory swap peak. Jan 29 11:56:16.965127 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:56:16.967462 systemd-logind[1487]: Removed session 11. Jan 29 11:56:17.259505 kubelet[2729]: I0129 11:56:17.259139 2729 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:56:17.260423 containerd[1506]: time="2025-01-29T11:56:17.260133878Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:56:17.261092 kubelet[2729]: I0129 11:56:17.261049 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:56:18.259676 kubelet[2729]: I0129 11:56:18.259442 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-xy63l.gb1.brightbox.com" podStartSLOduration=5.259387861 podStartE2EDuration="5.259387861s" podCreationTimestamp="2025-01-29 11:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:14.379123494 +0000 UTC m=+1.359896439" watchObservedRunningTime="2025-01-29 11:56:18.259387861 +0000 UTC m=+5.240160805" Jan 29 11:56:18.285992 systemd[1]: Created slice kubepods-besteffort-pod6cb41318_1f9d_42a5_b851_1d618eade06c.slice - libcontainer container kubepods-besteffort-pod6cb41318_1f9d_42a5_b851_1d618eade06c.slice. Jan 29 11:56:18.305020 systemd[1]: Created slice kubepods-burstable-podb8948ca6_8c18_4419_a43d_5ca59e8c990a.slice - libcontainer container kubepods-burstable-podb8948ca6_8c18_4419_a43d_5ca59e8c990a.slice. 
Jan 29 11:56:18.350773 kubelet[2729]: I0129 11:56:18.349510 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-bpf-maps\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.350773 kubelet[2729]: I0129 11:56:18.349571 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-lib-modules\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.350773 kubelet[2729]: I0129 11:56:18.349615 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6cb41318-1f9d-42a5-b851-1d618eade06c-kube-proxy\") pod \"kube-proxy-bhfqt\" (UID: \"6cb41318-1f9d-42a5-b851-1d618eade06c\") " pod="kube-system/kube-proxy-bhfqt" Jan 29 11:56:18.350773 kubelet[2729]: I0129 11:56:18.349645 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-run\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.350773 kubelet[2729]: I0129 11:56:18.349671 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cni-path\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.350773 kubelet[2729]: I0129 11:56:18.349695 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-xtables-lock\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.351305 kubelet[2729]: I0129 11:56:18.349726 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hjqk\" (UniqueName: \"kubernetes.io/projected/6cb41318-1f9d-42a5-b851-1d618eade06c-kube-api-access-6hjqk\") pod \"kube-proxy-bhfqt\" (UID: \"6cb41318-1f9d-42a5-b851-1d618eade06c\") " pod="kube-system/kube-proxy-bhfqt" Jan 29 11:56:18.351305 kubelet[2729]: I0129 11:56:18.349760 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-cgroup\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.351305 kubelet[2729]: I0129 11:56:18.349786 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-etc-cni-netd\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.351305 kubelet[2729]: I0129 11:56:18.349812 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-kernel\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.351305 kubelet[2729]: I0129 11:56:18.349836 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hubble-tls\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.352521 kubelet[2729]: I0129 11:56:18.349860 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cb41318-1f9d-42a5-b851-1d618eade06c-lib-modules\") pod \"kube-proxy-bhfqt\" (UID: \"6cb41318-1f9d-42a5-b851-1d618eade06c\") " pod="kube-system/kube-proxy-bhfqt" Jan 29 11:56:18.352521 kubelet[2729]: I0129 11:56:18.349885 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-net\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.352521 kubelet[2729]: I0129 11:56:18.349940 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8948ca6-8c18-4419-a43d-5ca59e8c990a-clustermesh-secrets\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.352521 kubelet[2729]: I0129 11:56:18.349974 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8cd\" (UniqueName: \"kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-kube-api-access-bd8cd\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.352521 kubelet[2729]: I0129 11:56:18.350001 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-config-path\") pod 
\"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.352794 kubelet[2729]: I0129 11:56:18.350036 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cb41318-1f9d-42a5-b851-1d618eade06c-xtables-lock\") pod \"kube-proxy-bhfqt\" (UID: \"6cb41318-1f9d-42a5-b851-1d618eade06c\") " pod="kube-system/kube-proxy-bhfqt" Jan 29 11:56:18.352794 kubelet[2729]: I0129 11:56:18.350063 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hostproc\") pod \"cilium-2v86t\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " pod="kube-system/cilium-2v86t" Jan 29 11:56:18.441099 systemd[1]: Created slice kubepods-besteffort-pod9a111d66_9811_4caf_8a2f_a909f93335a2.slice - libcontainer container kubepods-besteffort-pod9a111d66_9811_4caf_8a2f_a909f93335a2.slice. 
Jan 29 11:56:18.452016 kubelet[2729]: I0129 11:56:18.450436 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a111d66-9811-4caf-8a2f-a909f93335a2-cilium-config-path\") pod \"cilium-operator-5d85765b45-c5sjl\" (UID: \"9a111d66-9811-4caf-8a2f-a909f93335a2\") " pod="kube-system/cilium-operator-5d85765b45-c5sjl" Jan 29 11:56:18.452016 kubelet[2729]: I0129 11:56:18.450700 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjfx\" (UniqueName: \"kubernetes.io/projected/9a111d66-9811-4caf-8a2f-a909f93335a2-kube-api-access-prjfx\") pod \"cilium-operator-5d85765b45-c5sjl\" (UID: \"9a111d66-9811-4caf-8a2f-a909f93335a2\") " pod="kube-system/cilium-operator-5d85765b45-c5sjl" Jan 29 11:56:18.599371 containerd[1506]: time="2025-01-29T11:56:18.599168704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhfqt,Uid:6cb41318-1f9d-42a5-b851-1d618eade06c,Namespace:kube-system,Attempt:0,}" Jan 29 11:56:18.613340 containerd[1506]: time="2025-01-29T11:56:18.613015037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2v86t,Uid:b8948ca6-8c18-4419-a43d-5ca59e8c990a,Namespace:kube-system,Attempt:0,}" Jan 29 11:56:18.650127 containerd[1506]: time="2025-01-29T11:56:18.649264034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:18.652390 containerd[1506]: time="2025-01-29T11:56:18.652073736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:18.652390 containerd[1506]: time="2025-01-29T11:56:18.652135191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:18.652554 containerd[1506]: time="2025-01-29T11:56:18.652315302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:18.679682 containerd[1506]: time="2025-01-29T11:56:18.678853740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:18.682482 containerd[1506]: time="2025-01-29T11:56:18.682160134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:18.682482 containerd[1506]: time="2025-01-29T11:56:18.682221624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:18.682868 containerd[1506]: time="2025-01-29T11:56:18.682701712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:18.684546 systemd[1]: Started cri-containerd-830e86a1b64f8e84bda8e078126b50909f9bb77f154b1cf36a2e66435fc6f7c8.scope - libcontainer container 830e86a1b64f8e84bda8e078126b50909f9bb77f154b1cf36a2e66435fc6f7c8. Jan 29 11:56:18.719521 systemd[1]: Started cri-containerd-c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1.scope - libcontainer container c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1. 
Jan 29 11:56:18.746247 containerd[1506]: time="2025-01-29T11:56:18.745935815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhfqt,Uid:6cb41318-1f9d-42a5-b851-1d618eade06c,Namespace:kube-system,Attempt:0,} returns sandbox id \"830e86a1b64f8e84bda8e078126b50909f9bb77f154b1cf36a2e66435fc6f7c8\"" Jan 29 11:56:18.749042 containerd[1506]: time="2025-01-29T11:56:18.747757744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-c5sjl,Uid:9a111d66-9811-4caf-8a2f-a909f93335a2,Namespace:kube-system,Attempt:0,}" Jan 29 11:56:18.762855 containerd[1506]: time="2025-01-29T11:56:18.762734236Z" level=info msg="CreateContainer within sandbox \"830e86a1b64f8e84bda8e078126b50909f9bb77f154b1cf36a2e66435fc6f7c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:56:18.774887 containerd[1506]: time="2025-01-29T11:56:18.774820918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2v86t,Uid:b8948ca6-8c18-4419-a43d-5ca59e8c990a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\"" Jan 29 11:56:18.779840 containerd[1506]: time="2025-01-29T11:56:18.779382011Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:56:18.805498 containerd[1506]: time="2025-01-29T11:56:18.805439873Z" level=info msg="CreateContainer within sandbox \"830e86a1b64f8e84bda8e078126b50909f9bb77f154b1cf36a2e66435fc6f7c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2840b07463493a39bb549a1701809b1e8c318f7d1abd86fb4b965ad9bd0aec2f\"" Jan 29 11:56:18.807436 containerd[1506]: time="2025-01-29T11:56:18.805953050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:18.807436 containerd[1506]: time="2025-01-29T11:56:18.806040726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:18.807436 containerd[1506]: time="2025-01-29T11:56:18.806066309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:18.807436 containerd[1506]: time="2025-01-29T11:56:18.806526591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:18.809975 containerd[1506]: time="2025-01-29T11:56:18.807175182Z" level=info msg="StartContainer for \"2840b07463493a39bb549a1701809b1e8c318f7d1abd86fb4b965ad9bd0aec2f\"" Jan 29 11:56:18.841763 systemd[1]: Started cri-containerd-605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393.scope - libcontainer container 605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393. Jan 29 11:56:18.862984 systemd[1]: Started cri-containerd-2840b07463493a39bb549a1701809b1e8c318f7d1abd86fb4b965ad9bd0aec2f.scope - libcontainer container 2840b07463493a39bb549a1701809b1e8c318f7d1abd86fb4b965ad9bd0aec2f. 
Jan 29 11:56:18.935499 containerd[1506]: time="2025-01-29T11:56:18.935233993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-c5sjl,Uid:9a111d66-9811-4caf-8a2f-a909f93335a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393\"" Jan 29 11:56:18.935499 containerd[1506]: time="2025-01-29T11:56:18.935240735Z" level=info msg="StartContainer for \"2840b07463493a39bb549a1701809b1e8c318f7d1abd86fb4b965ad9bd0aec2f\" returns successfully" Jan 29 11:56:19.343368 kubelet[2729]: I0129 11:56:19.342468 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bhfqt" podStartSLOduration=1.34244933 podStartE2EDuration="1.34244933s" podCreationTimestamp="2025-01-29 11:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:19.342014312 +0000 UTC m=+6.322787255" watchObservedRunningTime="2025-01-29 11:56:19.34244933 +0000 UTC m=+6.323222271" Jan 29 11:56:26.207751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195037273.mount: Deactivated successfully. 
Jan 29 11:56:29.620290 containerd[1506]: time="2025-01-29T11:56:29.620134955Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.623405 containerd[1506]: time="2025-01-29T11:56:29.623240221Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:56:29.624649 containerd[1506]: time="2025-01-29T11:56:29.624599634Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.628410 containerd[1506]: time="2025-01-29T11:56:29.628355025Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.848900418s" Jan 29 11:56:29.628535 containerd[1506]: time="2025-01-29T11:56:29.628450891Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:56:29.631284 containerd[1506]: time="2025-01-29T11:56:29.630552494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:56:29.635302 containerd[1506]: time="2025-01-29T11:56:29.635259073Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:56:29.700898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293510049.mount: Deactivated successfully. Jan 29 11:56:29.708833 containerd[1506]: time="2025-01-29T11:56:29.708779864Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\"" Jan 29 11:56:29.710362 containerd[1506]: time="2025-01-29T11:56:29.709587834Z" level=info msg="StartContainer for \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\"" Jan 29 11:56:29.850501 systemd[1]: Started cri-containerd-4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587.scope - libcontainer container 4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587. Jan 29 11:56:29.902476 containerd[1506]: time="2025-01-29T11:56:29.900955110Z" level=info msg="StartContainer for \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\" returns successfully" Jan 29 11:56:29.930672 systemd[1]: cri-containerd-4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587.scope: Deactivated successfully. 
Jan 29 11:56:30.247732 containerd[1506]: time="2025-01-29T11:56:30.226425005Z" level=info msg="shim disconnected" id=4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587 namespace=k8s.io Jan 29 11:56:30.247732 containerd[1506]: time="2025-01-29T11:56:30.247659853Z" level=warning msg="cleaning up after shim disconnected" id=4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587 namespace=k8s.io Jan 29 11:56:30.247732 containerd[1506]: time="2025-01-29T11:56:30.247705006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:56:30.391230 containerd[1506]: time="2025-01-29T11:56:30.390669887Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:56:30.403095 containerd[1506]: time="2025-01-29T11:56:30.402972219Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\"" Jan 29 11:56:30.404547 containerd[1506]: time="2025-01-29T11:56:30.404515301Z" level=info msg="StartContainer for \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\"" Jan 29 11:56:30.453469 systemd[1]: Started cri-containerd-aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e.scope - libcontainer container aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e. Jan 29 11:56:30.495392 containerd[1506]: time="2025-01-29T11:56:30.494903472Z" level=info msg="StartContainer for \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\" returns successfully" Jan 29 11:56:30.513558 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:56:30.514810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:56:30.515162 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:56:30.521903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:56:30.522281 systemd[1]: cri-containerd-aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e.scope: Deactivated successfully. Jan 29 11:56:30.571799 containerd[1506]: time="2025-01-29T11:56:30.571615113Z" level=info msg="shim disconnected" id=aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e namespace=k8s.io Jan 29 11:56:30.571799 containerd[1506]: time="2025-01-29T11:56:30.571685124Z" level=warning msg="cleaning up after shim disconnected" id=aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e namespace=k8s.io Jan 29 11:56:30.571799 containerd[1506]: time="2025-01-29T11:56:30.571701829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:56:30.594760 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:56:30.697269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587-rootfs.mount: Deactivated successfully. 
Jan 29 11:56:31.420786 containerd[1506]: time="2025-01-29T11:56:31.420567174Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:56:31.503352 containerd[1506]: time="2025-01-29T11:56:31.503086634Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\"" Jan 29 11:56:31.505404 containerd[1506]: time="2025-01-29T11:56:31.504097416Z" level=info msg="StartContainer for \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\"" Jan 29 11:56:31.559504 systemd[1]: Started cri-containerd-93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d.scope - libcontainer container 93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d. Jan 29 11:56:31.617285 containerd[1506]: time="2025-01-29T11:56:31.616001920Z" level=info msg="StartContainer for \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\" returns successfully" Jan 29 11:56:31.622372 systemd[1]: cri-containerd-93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d.scope: Deactivated successfully. 
Jan 29 11:56:31.671319 containerd[1506]: time="2025-01-29T11:56:31.670966227Z" level=info msg="shim disconnected" id=93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d namespace=k8s.io Jan 29 11:56:31.672344 containerd[1506]: time="2025-01-29T11:56:31.672289094Z" level=warning msg="cleaning up after shim disconnected" id=93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d namespace=k8s.io Jan 29 11:56:31.672486 containerd[1506]: time="2025-01-29T11:56:31.672415907Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:56:31.698660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d-rootfs.mount: Deactivated successfully. Jan 29 11:56:32.029646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765826734.mount: Deactivated successfully. Jan 29 11:56:32.401472 containerd[1506]: time="2025-01-29T11:56:32.400824367Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:56:32.426972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770852868.mount: Deactivated successfully. 
Jan 29 11:56:32.436346 containerd[1506]: time="2025-01-29T11:56:32.436240823Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\"" Jan 29 11:56:32.438599 containerd[1506]: time="2025-01-29T11:56:32.438176366Z" level=info msg="StartContainer for \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\"" Jan 29 11:56:32.560076 systemd[1]: Started cri-containerd-f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5.scope - libcontainer container f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5. Jan 29 11:56:32.620083 systemd[1]: cri-containerd-f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5.scope: Deactivated successfully. Jan 29 11:56:32.625111 containerd[1506]: time="2025-01-29T11:56:32.624400813Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8948ca6_8c18_4419_a43d_5ca59e8c990a.slice/cri-containerd-f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5.scope/memory.events\": no such file or directory" Jan 29 11:56:32.628038 containerd[1506]: time="2025-01-29T11:56:32.627833834Z" level=info msg="StartContainer for \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\" returns successfully" Jan 29 11:56:32.701650 containerd[1506]: time="2025-01-29T11:56:32.701485846Z" level=info msg="shim disconnected" id=f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5 namespace=k8s.io Jan 29 11:56:32.701650 containerd[1506]: time="2025-01-29T11:56:32.701570946Z" level=warning msg="cleaning up after shim disconnected" id=f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5 namespace=k8s.io Jan 29 11:56:32.701650 containerd[1506]: 
time="2025-01-29T11:56:32.701588440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:56:32.734433 containerd[1506]: time="2025-01-29T11:56:32.734363661Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:56:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:56:32.982452 containerd[1506]: time="2025-01-29T11:56:32.982377788Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:33.029964 containerd[1506]: time="2025-01-29T11:56:33.029866705Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:56:33.031153 containerd[1506]: time="2025-01-29T11:56:33.031120644Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:33.033906 containerd[1506]: time="2025-01-29T11:56:33.033563468Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.402957358s" Jan 29 11:56:33.033906 containerd[1506]: time="2025-01-29T11:56:33.033612473Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:56:33.038336 containerd[1506]: time="2025-01-29T11:56:33.037974734Z" level=info msg="CreateContainer within sandbox \"605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:56:33.055149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542515035.mount: Deactivated successfully. Jan 29 11:56:33.059665 containerd[1506]: time="2025-01-29T11:56:33.059626308Z" level=info msg="CreateContainer within sandbox \"605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\"" Jan 29 11:56:33.060753 containerd[1506]: time="2025-01-29T11:56:33.060668142Z" level=info msg="StartContainer for \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\"" Jan 29 11:56:33.111493 systemd[1]: Started cri-containerd-815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613.scope - libcontainer container 815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613. 
Jan 29 11:56:33.164075 containerd[1506]: time="2025-01-29T11:56:33.164010576Z" level=info msg="StartContainer for \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\" returns successfully" Jan 29 11:56:33.412098 containerd[1506]: time="2025-01-29T11:56:33.411944518Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:56:33.454196 containerd[1506]: time="2025-01-29T11:56:33.454127612Z" level=info msg="CreateContainer within sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\"" Jan 29 11:56:33.456097 containerd[1506]: time="2025-01-29T11:56:33.455801620Z" level=info msg="StartContainer for \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\"" Jan 29 11:56:33.524475 systemd[1]: Started cri-containerd-68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8.scope - libcontainer container 68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8. 
Jan 29 11:56:33.628393 kubelet[2729]: I0129 11:56:33.626511 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-c5sjl" podStartSLOduration=1.529387937 podStartE2EDuration="15.6264392s" podCreationTimestamp="2025-01-29 11:56:18 +0000 UTC" firstStartedPulling="2025-01-29 11:56:18.937970776 +0000 UTC m=+5.918743701" lastFinishedPulling="2025-01-29 11:56:33.035022026 +0000 UTC m=+20.015794964" observedRunningTime="2025-01-29 11:56:33.499482418 +0000 UTC m=+20.480255374" watchObservedRunningTime="2025-01-29 11:56:33.6264392 +0000 UTC m=+20.607212144" Jan 29 11:56:33.667063 containerd[1506]: time="2025-01-29T11:56:33.666233582Z" level=info msg="StartContainer for \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\" returns successfully" Jan 29 11:56:33.698632 systemd[1]: run-containerd-runc-k8s.io-815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613-runc.j5B9Kc.mount: Deactivated successfully. Jan 29 11:56:34.219765 kubelet[2729]: I0129 11:56:34.219707 2729 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:56:34.318742 kubelet[2729]: W0129 11:56:34.318582 2729 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-xy63l.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-xy63l.gb1.brightbox.com' and this object Jan 29 11:56:34.318742 kubelet[2729]: E0129 11:56:34.318685 2729 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-xy63l.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-xy63l.gb1.brightbox.com' and this object" 
logger="UnhandledError" Jan 29 11:56:34.322559 systemd[1]: Created slice kubepods-burstable-pod3e550092_5f6d_44d2_8e3d_7ad5978564cd.slice - libcontainer container kubepods-burstable-pod3e550092_5f6d_44d2_8e3d_7ad5978564cd.slice. Jan 29 11:56:34.336468 systemd[1]: Created slice kubepods-burstable-podca88a35a_4b47_491a_8a28_67b8770f4563.slice - libcontainer container kubepods-burstable-podca88a35a_4b47_491a_8a28_67b8770f4563.slice. Jan 29 11:56:34.461608 kubelet[2729]: I0129 11:56:34.461356 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmll7\" (UniqueName: \"kubernetes.io/projected/ca88a35a-4b47-491a-8a28-67b8770f4563-kube-api-access-wmll7\") pod \"coredns-6f6b679f8f-hnnc7\" (UID: \"ca88a35a-4b47-491a-8a28-67b8770f4563\") " pod="kube-system/coredns-6f6b679f8f-hnnc7" Jan 29 11:56:34.461608 kubelet[2729]: I0129 11:56:34.461421 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca88a35a-4b47-491a-8a28-67b8770f4563-config-volume\") pod \"coredns-6f6b679f8f-hnnc7\" (UID: \"ca88a35a-4b47-491a-8a28-67b8770f4563\") " pod="kube-system/coredns-6f6b679f8f-hnnc7" Jan 29 11:56:34.461608 kubelet[2729]: I0129 11:56:34.461455 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p844v\" (UniqueName: \"kubernetes.io/projected/3e550092-5f6d-44d2-8e3d-7ad5978564cd-kube-api-access-p844v\") pod \"coredns-6f6b679f8f-bfk2d\" (UID: \"3e550092-5f6d-44d2-8e3d-7ad5978564cd\") " pod="kube-system/coredns-6f6b679f8f-bfk2d" Jan 29 11:56:34.461608 kubelet[2729]: I0129 11:56:34.461483 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e550092-5f6d-44d2-8e3d-7ad5978564cd-config-volume\") pod \"coredns-6f6b679f8f-bfk2d\" (UID: 
\"3e550092-5f6d-44d2-8e3d-7ad5978564cd\") " pod="kube-system/coredns-6f6b679f8f-bfk2d" Jan 29 11:56:34.565945 kubelet[2729]: I0129 11:56:34.565025 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2v86t" podStartSLOduration=5.71373239 podStartE2EDuration="16.565005274s" podCreationTimestamp="2025-01-29 11:56:18 +0000 UTC" firstStartedPulling="2025-01-29 11:56:18.778639596 +0000 UTC m=+5.759412524" lastFinishedPulling="2025-01-29 11:56:29.629912484 +0000 UTC m=+16.610685408" observedRunningTime="2025-01-29 11:56:34.537623167 +0000 UTC m=+21.518396123" watchObservedRunningTime="2025-01-29 11:56:34.565005274 +0000 UTC m=+21.545778198" Jan 29 11:56:35.563191 kubelet[2729]: E0129 11:56:35.563119 2729 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:56:35.563956 kubelet[2729]: E0129 11:56:35.563375 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e550092-5f6d-44d2-8e3d-7ad5978564cd-config-volume podName:3e550092-5f6d-44d2-8e3d-7ad5978564cd nodeName:}" failed. No retries permitted until 2025-01-29 11:56:36.063333637 +0000 UTC m=+23.044106560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3e550092-5f6d-44d2-8e3d-7ad5978564cd-config-volume") pod "coredns-6f6b679f8f-bfk2d" (UID: "3e550092-5f6d-44d2-8e3d-7ad5978564cd") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:56:35.565775 kubelet[2729]: E0129 11:56:35.565728 2729 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:56:35.565877 kubelet[2729]: E0129 11:56:35.565823 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca88a35a-4b47-491a-8a28-67b8770f4563-config-volume podName:ca88a35a-4b47-491a-8a28-67b8770f4563 nodeName:}" failed. No retries permitted until 2025-01-29 11:56:36.065807006 +0000 UTC m=+23.046579929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ca88a35a-4b47-491a-8a28-67b8770f4563-config-volume") pod "coredns-6f6b679f8f-hnnc7" (UID: "ca88a35a-4b47-491a-8a28-67b8770f4563") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:56:36.129109 containerd[1506]: time="2025-01-29T11:56:36.129056650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfk2d,Uid:3e550092-5f6d-44d2-8e3d-7ad5978564cd,Namespace:kube-system,Attempt:0,}"
Jan 29 11:56:36.144558 containerd[1506]: time="2025-01-29T11:56:36.142988278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hnnc7,Uid:ca88a35a-4b47-491a-8a28-67b8770f4563,Namespace:kube-system,Attempt:0,}"
Jan 29 11:56:37.682567 systemd-networkd[1430]: cilium_host: Link UP
Jan 29 11:56:37.685125 systemd-networkd[1430]: cilium_net: Link UP
Jan 29 11:56:37.688287 systemd-networkd[1430]: cilium_net: Gained carrier
Jan 29 11:56:37.688611 systemd-networkd[1430]: cilium_host: Gained carrier
Jan 29 11:56:37.862445 systemd-networkd[1430]: cilium_vxlan: Link UP
Jan 29 11:56:37.862457 systemd-networkd[1430]: cilium_vxlan: Gained carrier
Jan 29 11:56:38.168633 systemd-networkd[1430]: cilium_net: Gained IPv6LL
Jan 29 11:56:38.240525 systemd-networkd[1430]: cilium_host: Gained IPv6LL
Jan 29 11:56:38.473331 kernel: NET: Registered PF_ALG protocol family
Jan 29 11:56:39.136705 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL
Jan 29 11:56:39.548054 systemd-networkd[1430]: lxc_health: Link UP
Jan 29 11:56:39.558891 systemd-networkd[1430]: lxc_health: Gained carrier
Jan 29 11:56:40.256515 systemd-networkd[1430]: lxcf24dbc155898: Link UP
Jan 29 11:56:40.286342 kernel: eth0: renamed from tmp6e288
Jan 29 11:56:40.283488 systemd-networkd[1430]: lxc69054879da7f: Link UP
Jan 29 11:56:40.293384 kernel: eth0: renamed from tmp04f39
Jan 29 11:56:40.301074 systemd-networkd[1430]: lxcf24dbc155898: Gained carrier
Jan 29 11:56:40.303310 systemd-networkd[1430]: lxc69054879da7f: Gained carrier
Jan 29 11:56:41.184449 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Jan 29 11:56:41.504596 systemd-networkd[1430]: lxc69054879da7f: Gained IPv6LL
Jan 29 11:56:41.762516 systemd-networkd[1430]: lxcf24dbc155898: Gained IPv6LL
Jan 29 11:56:43.355226 kubelet[2729]: I0129 11:56:43.354600 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:56:46.129020 containerd[1506]: time="2025-01-29T11:56:46.128525838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:56:46.129020 containerd[1506]: time="2025-01-29T11:56:46.128736122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:56:46.129020 containerd[1506]: time="2025-01-29T11:56:46.128765794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:46.132688 containerd[1506]: time="2025-01-29T11:56:46.128927031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:46.235968 systemd[1]: Started cri-containerd-04f39bbc9eece90d4a3b849823d8eae417a2da90fdb4c710e904703a89988f8b.scope - libcontainer container 04f39bbc9eece90d4a3b849823d8eae417a2da90fdb4c710e904703a89988f8b.
Jan 29 11:56:46.247119 containerd[1506]: time="2025-01-29T11:56:46.246198751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:56:46.247119 containerd[1506]: time="2025-01-29T11:56:46.246346954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:56:46.247119 containerd[1506]: time="2025-01-29T11:56:46.246370796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:46.248282 containerd[1506]: time="2025-01-29T11:56:46.247079425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:56:46.318956 systemd[1]: run-containerd-runc-k8s.io-6e288faca8dfd78a2b10193245d4eb13f6fb100aba1cedbe8638745ab272fa1b-runc.ABnrVf.mount: Deactivated successfully.
Jan 29 11:56:46.330400 systemd[1]: Started cri-containerd-6e288faca8dfd78a2b10193245d4eb13f6fb100aba1cedbe8638745ab272fa1b.scope - libcontainer container 6e288faca8dfd78a2b10193245d4eb13f6fb100aba1cedbe8638745ab272fa1b.
Jan 29 11:56:46.378118 containerd[1506]: time="2025-01-29T11:56:46.378006329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfk2d,Uid:3e550092-5f6d-44d2-8e3d-7ad5978564cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"04f39bbc9eece90d4a3b849823d8eae417a2da90fdb4c710e904703a89988f8b\""
Jan 29 11:56:46.387041 containerd[1506]: time="2025-01-29T11:56:46.386025423Z" level=info msg="CreateContainer within sandbox \"04f39bbc9eece90d4a3b849823d8eae417a2da90fdb4c710e904703a89988f8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:56:46.425493 containerd[1506]: time="2025-01-29T11:56:46.424818303Z" level=info msg="CreateContainer within sandbox \"04f39bbc9eece90d4a3b849823d8eae417a2da90fdb4c710e904703a89988f8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf2558adb16c58e2e919bb5e562f7c7295208920a081ba6d52879fe76dd9ce9c\""
Jan 29 11:56:46.429061 containerd[1506]: time="2025-01-29T11:56:46.428144573Z" level=info msg="StartContainer for \"cf2558adb16c58e2e919bb5e562f7c7295208920a081ba6d52879fe76dd9ce9c\""
Jan 29 11:56:46.446681 containerd[1506]: time="2025-01-29T11:56:46.446620862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hnnc7,Uid:ca88a35a-4b47-491a-8a28-67b8770f4563,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e288faca8dfd78a2b10193245d4eb13f6fb100aba1cedbe8638745ab272fa1b\""
Jan 29 11:56:46.456500 containerd[1506]: time="2025-01-29T11:56:46.455616962Z" level=info msg="CreateContainer within sandbox \"6e288faca8dfd78a2b10193245d4eb13f6fb100aba1cedbe8638745ab272fa1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:56:46.474307 containerd[1506]: time="2025-01-29T11:56:46.474131929Z" level=info msg="CreateContainer within sandbox \"6e288faca8dfd78a2b10193245d4eb13f6fb100aba1cedbe8638745ab272fa1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a092e2bbc84e78ff7a5e12bbb3e5f8a587a3229248fea074adc890804399e44\""
Jan 29 11:56:46.475732 containerd[1506]: time="2025-01-29T11:56:46.475213974Z" level=info msg="StartContainer for \"4a092e2bbc84e78ff7a5e12bbb3e5f8a587a3229248fea074adc890804399e44\""
Jan 29 11:56:46.496585 systemd[1]: Started cri-containerd-cf2558adb16c58e2e919bb5e562f7c7295208920a081ba6d52879fe76dd9ce9c.scope - libcontainer container cf2558adb16c58e2e919bb5e562f7c7295208920a081ba6d52879fe76dd9ce9c.
Jan 29 11:56:46.533567 systemd[1]: Started cri-containerd-4a092e2bbc84e78ff7a5e12bbb3e5f8a587a3229248fea074adc890804399e44.scope - libcontainer container 4a092e2bbc84e78ff7a5e12bbb3e5f8a587a3229248fea074adc890804399e44.
Jan 29 11:56:46.580701 containerd[1506]: time="2025-01-29T11:56:46.580586942Z" level=info msg="StartContainer for \"cf2558adb16c58e2e919bb5e562f7c7295208920a081ba6d52879fe76dd9ce9c\" returns successfully"
Jan 29 11:56:46.589573 containerd[1506]: time="2025-01-29T11:56:46.589523160Z" level=info msg="StartContainer for \"4a092e2bbc84e78ff7a5e12bbb3e5f8a587a3229248fea074adc890804399e44\" returns successfully"
Jan 29 11:56:47.491218 kubelet[2729]: I0129 11:56:47.491001 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hnnc7" podStartSLOduration=29.490486703 podStartE2EDuration="29.490486703s" podCreationTimestamp="2025-01-29 11:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:47.488647907 +0000 UTC m=+34.469420868" watchObservedRunningTime="2025-01-29 11:56:47.490486703 +0000 UTC m=+34.471259642"
Jan 29 11:56:47.517288 kubelet[2729]: I0129 11:56:47.515562 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bfk2d" podStartSLOduration=29.515544515 podStartE2EDuration="29.515544515s" podCreationTimestamp="2025-01-29 11:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:47.511464733 +0000 UTC m=+34.492237689" watchObservedRunningTime="2025-01-29 11:56:47.515544515 +0000 UTC m=+34.496317450"
Jan 29 11:57:24.057742 systemd[1]: Started sshd@9-10.230.10.162:22-139.178.68.195:33008.service - OpenSSH per-connection server daemon (139.178.68.195:33008).
Jan 29 11:57:24.998889 sshd[4118]: Accepted publickey for core from 139.178.68.195 port 33008 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:25.003764 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:25.016314 systemd-logind[1487]: New session 12 of user core.
Jan 29 11:57:25.027550 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 11:57:26.163052 sshd[4121]: Connection closed by 139.178.68.195 port 33008
Jan 29 11:57:26.165418 sshd-session[4118]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:26.173981 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Jan 29 11:57:26.176861 systemd[1]: sshd@9-10.230.10.162:22-139.178.68.195:33008.service: Deactivated successfully.
Jan 29 11:57:26.181433 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 11:57:26.183701 systemd-logind[1487]: Removed session 12.
Jan 29 11:57:31.327679 systemd[1]: Started sshd@10-10.230.10.162:22-139.178.68.195:33634.service - OpenSSH per-connection server daemon (139.178.68.195:33634).
Jan 29 11:57:32.235269 sshd[4133]: Accepted publickey for core from 139.178.68.195 port 33634 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:32.237420 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:32.245669 systemd-logind[1487]: New session 13 of user core.
Jan 29 11:57:32.256451 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 11:57:32.978356 sshd[4135]: Connection closed by 139.178.68.195 port 33634
Jan 29 11:57:32.979424 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:32.985415 systemd[1]: sshd@10-10.230.10.162:22-139.178.68.195:33634.service: Deactivated successfully.
Jan 29 11:57:32.989425 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 11:57:32.990964 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Jan 29 11:57:32.993233 systemd-logind[1487]: Removed session 13.
Jan 29 11:57:38.139870 systemd[1]: Started sshd@11-10.230.10.162:22-139.178.68.195:33946.service - OpenSSH per-connection server daemon (139.178.68.195:33946).
Jan 29 11:57:39.040609 sshd[4148]: Accepted publickey for core from 139.178.68.195 port 33946 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:39.042769 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:39.050709 systemd-logind[1487]: New session 14 of user core.
Jan 29 11:57:39.055489 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 11:57:39.755975 sshd[4150]: Connection closed by 139.178.68.195 port 33946
Jan 29 11:57:39.755589 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:39.761757 systemd[1]: sshd@11-10.230.10.162:22-139.178.68.195:33946.service: Deactivated successfully.
Jan 29 11:57:39.765404 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:57:39.766775 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:57:39.768516 systemd-logind[1487]: Removed session 14.
Jan 29 11:57:44.917622 systemd[1]: Started sshd@12-10.230.10.162:22-139.178.68.195:36440.service - OpenSSH per-connection server daemon (139.178.68.195:36440).
Jan 29 11:57:45.831861 sshd[4163]: Accepted publickey for core from 139.178.68.195 port 36440 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:45.834240 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:45.842635 systemd-logind[1487]: New session 15 of user core.
Jan 29 11:57:45.848563 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 11:57:46.563169 sshd[4165]: Connection closed by 139.178.68.195 port 36440
Jan 29 11:57:46.564528 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:46.569306 systemd[1]: sshd@12-10.230.10.162:22-139.178.68.195:36440.service: Deactivated successfully.
Jan 29 11:57:46.572151 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:57:46.574601 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:57:46.576165 systemd-logind[1487]: Removed session 15.
Jan 29 11:57:46.722669 systemd[1]: Started sshd@13-10.230.10.162:22-139.178.68.195:36448.service - OpenSSH per-connection server daemon (139.178.68.195:36448).
Jan 29 11:57:47.629853 sshd[4177]: Accepted publickey for core from 139.178.68.195 port 36448 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:47.631232 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:47.640563 systemd-logind[1487]: New session 16 of user core.
Jan 29 11:57:47.650503 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:57:48.427348 sshd[4179]: Connection closed by 139.178.68.195 port 36448
Jan 29 11:57:48.428574 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:48.434487 systemd[1]: sshd@13-10.230.10.162:22-139.178.68.195:36448.service: Deactivated successfully.
Jan 29 11:57:48.437993 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:57:48.439570 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:57:48.441371 systemd-logind[1487]: Removed session 16.
Jan 29 11:57:48.589655 systemd[1]: Started sshd@14-10.230.10.162:22-139.178.68.195:36462.service - OpenSSH per-connection server daemon (139.178.68.195:36462).
Jan 29 11:57:49.482152 sshd[4188]: Accepted publickey for core from 139.178.68.195 port 36462 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:49.484304 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:49.491661 systemd-logind[1487]: New session 17 of user core.
Jan 29 11:57:49.501647 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:57:50.197025 sshd[4192]: Connection closed by 139.178.68.195 port 36462
Jan 29 11:57:50.198077 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:50.202038 systemd[1]: sshd@14-10.230.10.162:22-139.178.68.195:36462.service: Deactivated successfully.
Jan 29 11:57:50.205224 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:57:50.207733 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:57:50.209954 systemd-logind[1487]: Removed session 17.
Jan 29 11:57:55.362661 systemd[1]: Started sshd@15-10.230.10.162:22-139.178.68.195:57792.service - OpenSSH per-connection server daemon (139.178.68.195:57792).
Jan 29 11:57:56.247035 sshd[4202]: Accepted publickey for core from 139.178.68.195 port 57792 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:56.248052 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:56.254435 systemd-logind[1487]: New session 18 of user core.
Jan 29 11:57:56.268531 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:57:56.942064 sshd[4204]: Connection closed by 139.178.68.195 port 57792
Jan 29 11:57:56.943129 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:56.948531 systemd[1]: sshd@15-10.230.10.162:22-139.178.68.195:57792.service: Deactivated successfully.
Jan 29 11:57:56.951740 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:57:56.952980 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:57:56.954412 systemd-logind[1487]: Removed session 18.
Jan 29 11:57:57.099617 systemd[1]: Started sshd@16-10.230.10.162:22-139.178.68.195:57796.service - OpenSSH per-connection server daemon (139.178.68.195:57796).
Jan 29 11:57:58.005899 sshd[4216]: Accepted publickey for core from 139.178.68.195 port 57796 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:57:58.008353 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:57:58.014673 systemd-logind[1487]: New session 19 of user core.
Jan 29 11:57:58.023520 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:57:58.353599 systemd[1]: Started sshd@17-10.230.10.162:22-92.255.85.188:22392.service - OpenSSH per-connection server daemon (92.255.85.188:22392).
Jan 29 11:57:58.880692 sshd[4220]: Invalid user teste from 92.255.85.188 port 22392
Jan 29 11:57:59.019951 sshd[4220]: Connection closed by invalid user teste 92.255.85.188 port 22392 [preauth]
Jan 29 11:57:59.021852 systemd[1]: sshd@17-10.230.10.162:22-92.255.85.188:22392.service: Deactivated successfully.
Jan 29 11:57:59.077986 sshd[4218]: Connection closed by 139.178.68.195 port 57796
Jan 29 11:57:59.079924 sshd-session[4216]: pam_unix(sshd:session): session closed for user core
Jan 29 11:57:59.084573 systemd[1]: sshd@16-10.230.10.162:22-139.178.68.195:57796.service: Deactivated successfully.
Jan 29 11:57:59.087866 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:57:59.090924 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:57:59.093380 systemd-logind[1487]: Removed session 19.
Jan 29 11:57:59.239700 systemd[1]: Started sshd@18-10.230.10.162:22-139.178.68.195:57802.service - OpenSSH per-connection server daemon (139.178.68.195:57802).
Jan 29 11:58:00.157063 sshd[4232]: Accepted publickey for core from 139.178.68.195 port 57802 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:00.159079 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:00.166116 systemd-logind[1487]: New session 20 of user core.
Jan 29 11:58:00.172653 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:58:03.010193 sshd[4234]: Connection closed by 139.178.68.195 port 57802
Jan 29 11:58:03.011513 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:03.022173 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:58:03.022939 systemd[1]: sshd@18-10.230.10.162:22-139.178.68.195:57802.service: Deactivated successfully.
Jan 29 11:58:03.027125 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:58:03.029646 systemd-logind[1487]: Removed session 20.
Jan 29 11:58:03.170756 systemd[1]: Started sshd@19-10.230.10.162:22-139.178.68.195:57816.service - OpenSSH per-connection server daemon (139.178.68.195:57816).
Jan 29 11:58:04.073912 sshd[4250]: Accepted publickey for core from 139.178.68.195 port 57816 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:04.076068 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:04.085541 systemd-logind[1487]: New session 21 of user core.
Jan 29 11:58:04.094433 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:58:05.002166 sshd[4252]: Connection closed by 139.178.68.195 port 57816
Jan 29 11:58:05.003437 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:05.008720 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:58:05.009155 systemd[1]: sshd@19-10.230.10.162:22-139.178.68.195:57816.service: Deactivated successfully.
Jan 29 11:58:05.011703 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:58:05.012977 systemd-logind[1487]: Removed session 21.
Jan 29 11:58:05.159107 systemd[1]: Started sshd@20-10.230.10.162:22-139.178.68.195:47728.service - OpenSSH per-connection server daemon (139.178.68.195:47728).
Jan 29 11:58:06.068335 sshd[4260]: Accepted publickey for core from 139.178.68.195 port 47728 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:06.070368 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:06.078390 systemd-logind[1487]: New session 22 of user core.
Jan 29 11:58:06.086457 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:58:06.784667 sshd[4262]: Connection closed by 139.178.68.195 port 47728
Jan 29 11:58:06.785839 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:06.791076 systemd[1]: sshd@20-10.230.10.162:22-139.178.68.195:47728.service: Deactivated successfully.
Jan 29 11:58:06.795199 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:58:06.796600 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:58:06.798276 systemd-logind[1487]: Removed session 22.
Jan 29 11:58:11.948362 systemd[1]: Started sshd@21-10.230.10.162:22-139.178.68.195:47730.service - OpenSSH per-connection server daemon (139.178.68.195:47730).
Jan 29 11:58:12.849747 sshd[4277]: Accepted publickey for core from 139.178.68.195 port 47730 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:12.851848 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:12.860876 systemd-logind[1487]: New session 23 of user core.
Jan 29 11:58:12.869478 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:58:13.562318 sshd[4279]: Connection closed by 139.178.68.195 port 47730
Jan 29 11:58:13.563332 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:13.567240 systemd[1]: sshd@21-10.230.10.162:22-139.178.68.195:47730.service: Deactivated successfully.
Jan 29 11:58:13.569727 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:58:13.571838 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:58:13.573533 systemd-logind[1487]: Removed session 23.
Jan 29 11:58:18.718624 systemd[1]: Started sshd@22-10.230.10.162:22-139.178.68.195:47984.service - OpenSSH per-connection server daemon (139.178.68.195:47984).
Jan 29 11:58:19.619858 sshd[4292]: Accepted publickey for core from 139.178.68.195 port 47984 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:19.621931 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:19.634779 systemd-logind[1487]: New session 24 of user core.
Jan 29 11:58:19.641501 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:58:20.362912 sshd[4296]: Connection closed by 139.178.68.195 port 47984
Jan 29 11:58:20.364345 sshd-session[4292]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:20.370503 systemd[1]: sshd@22-10.230.10.162:22-139.178.68.195:47984.service: Deactivated successfully.
Jan 29 11:58:20.375428 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:58:20.376658 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:58:20.378374 systemd-logind[1487]: Removed session 24.
Jan 29 11:58:25.531713 systemd[1]: Started sshd@23-10.230.10.162:22-139.178.68.195:47234.service - OpenSSH per-connection server daemon (139.178.68.195:47234).
Jan 29 11:58:26.442487 sshd[4307]: Accepted publickey for core from 139.178.68.195 port 47234 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:26.444870 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:26.452713 systemd-logind[1487]: New session 25 of user core.
Jan 29 11:58:26.464538 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:58:27.160291 sshd[4309]: Connection closed by 139.178.68.195 port 47234
Jan 29 11:58:27.161375 sshd-session[4307]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:27.166860 systemd[1]: sshd@23-10.230.10.162:22-139.178.68.195:47234.service: Deactivated successfully.
Jan 29 11:58:27.170010 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:58:27.171409 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:58:27.172954 systemd-logind[1487]: Removed session 25.
Jan 29 11:58:27.318591 systemd[1]: Started sshd@24-10.230.10.162:22-139.178.68.195:47250.service - OpenSSH per-connection server daemon (139.178.68.195:47250).
Jan 29 11:58:28.218458 sshd[4320]: Accepted publickey for core from 139.178.68.195 port 47250 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:28.221902 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:28.230233 systemd-logind[1487]: New session 26 of user core.
Jan 29 11:58:28.237699 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:58:30.848145 systemd[1]: run-containerd-runc-k8s.io-68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8-runc.kAsDrq.mount: Deactivated successfully.
Jan 29 11:58:30.883859 containerd[1506]: time="2025-01-29T11:58:30.883345754Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:58:30.892331 containerd[1506]: time="2025-01-29T11:58:30.892186370Z" level=info msg="StopContainer for \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\" with timeout 30 (s)"
Jan 29 11:58:30.895735 containerd[1506]: time="2025-01-29T11:58:30.895302429Z" level=info msg="Stop container \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\" with signal terminated"
Jan 29 11:58:30.905906 containerd[1506]: time="2025-01-29T11:58:30.905874998Z" level=info msg="StopContainer for \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\" with timeout 2 (s)"
Jan 29 11:58:30.906697 containerd[1506]: time="2025-01-29T11:58:30.906657803Z" level=info msg="Stop container \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\" with signal terminated"
Jan 29 11:58:30.919917 systemd[1]: cri-containerd-815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613.scope: Deactivated successfully.
Jan 29 11:58:30.936010 systemd-networkd[1430]: lxc_health: Link DOWN
Jan 29 11:58:30.936024 systemd-networkd[1430]: lxc_health: Lost carrier
Jan 29 11:58:30.962821 systemd[1]: cri-containerd-68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8.scope: Deactivated successfully.
Jan 29 11:58:30.963227 systemd[1]: cri-containerd-68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8.scope: Consumed 10.703s CPU time.
Jan 29 11:58:30.990156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613-rootfs.mount: Deactivated successfully.
Jan 29 11:58:30.999103 containerd[1506]: time="2025-01-29T11:58:30.998917867Z" level=info msg="shim disconnected" id=815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613 namespace=k8s.io
Jan 29 11:58:30.999103 containerd[1506]: time="2025-01-29T11:58:30.999064913Z" level=warning msg="cleaning up after shim disconnected" id=815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613 namespace=k8s.io
Jan 29 11:58:30.999103 containerd[1506]: time="2025-01-29T11:58:30.999088825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:31.034972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8-rootfs.mount: Deactivated successfully.
Jan 29 11:58:31.039645 containerd[1506]: time="2025-01-29T11:58:31.039533090Z" level=info msg="shim disconnected" id=68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8 namespace=k8s.io
Jan 29 11:58:31.039796 containerd[1506]: time="2025-01-29T11:58:31.039666832Z" level=warning msg="cleaning up after shim disconnected" id=68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8 namespace=k8s.io
Jan 29 11:58:31.039796 containerd[1506]: time="2025-01-29T11:58:31.039686249Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:31.053441 containerd[1506]: time="2025-01-29T11:58:31.053386423Z" level=info msg="StopContainer for \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\" returns successfully"
Jan 29 11:58:31.060667 containerd[1506]: time="2025-01-29T11:58:31.054978505Z" level=info msg="StopPodSandbox for \"605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393\""
Jan 29 11:58:31.063733 containerd[1506]: time="2025-01-29T11:58:31.063138686Z" level=info msg="Container to stop \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:58:31.069576 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393-shm.mount: Deactivated successfully.
Jan 29 11:58:31.088143 containerd[1506]: time="2025-01-29T11:58:31.088094249Z" level=info msg="StopContainer for \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\" returns successfully"
Jan 29 11:58:31.089548 containerd[1506]: time="2025-01-29T11:58:31.088997319Z" level=info msg="StopPodSandbox for \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\""
Jan 29 11:58:31.089548 containerd[1506]: time="2025-01-29T11:58:31.089057135Z" level=info msg="Container to stop \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:58:31.089548 containerd[1506]: time="2025-01-29T11:58:31.089124854Z" level=info msg="Container to stop \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:58:31.089548 containerd[1506]: time="2025-01-29T11:58:31.089144582Z" level=info msg="Container to stop \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:58:31.089548 containerd[1506]: time="2025-01-29T11:58:31.089160651Z" level=info msg="Container to stop \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:58:31.089548 containerd[1506]: time="2025-01-29T11:58:31.089176732Z" level=info msg="Container to stop \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:58:31.095171 systemd[1]: cri-containerd-605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393.scope: Deactivated successfully.
Jan 29 11:58:31.108351 systemd[1]: cri-containerd-c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1.scope: Deactivated successfully.
Jan 29 11:58:31.162931 containerd[1506]: time="2025-01-29T11:58:31.162812538Z" level=info msg="shim disconnected" id=605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393 namespace=k8s.io
Jan 29 11:58:31.162931 containerd[1506]: time="2025-01-29T11:58:31.162898803Z" level=warning msg="cleaning up after shim disconnected" id=605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393 namespace=k8s.io
Jan 29 11:58:31.162931 containerd[1506]: time="2025-01-29T11:58:31.162915283Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:31.163318 containerd[1506]: time="2025-01-29T11:58:31.163175706Z" level=info msg="shim disconnected" id=c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1 namespace=k8s.io
Jan 29 11:58:31.163318 containerd[1506]: time="2025-01-29T11:58:31.163210047Z" level=warning msg="cleaning up after shim disconnected" id=c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1 namespace=k8s.io
Jan 29 11:58:31.163318 containerd[1506]: time="2025-01-29T11:58:31.163221989Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:31.189778 containerd[1506]: time="2025-01-29T11:58:31.189680375Z" level=info msg="TearDown network for sandbox \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" successfully"
Jan 29 11:58:31.190126 containerd[1506]: time="2025-01-29T11:58:31.190079722Z" level=info msg="StopPodSandbox for \"c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1\" returns successfully"
Jan 29 11:58:31.197800 containerd[1506]: time="2025-01-29T11:58:31.196870495Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:58:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:58:31.200005 containerd[1506]: time="2025-01-29T11:58:31.199969454Z" level=info msg="TearDown network for sandbox \"605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393\" successfully"
Jan 29 11:58:31.200005 containerd[1506]: time="2025-01-29T11:58:31.200003776Z" level=info msg="StopPodSandbox for \"605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393\" returns successfully"
Jan 29 11:58:31.263240 kubelet[2729]: I0129 11:58:31.262703 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-run\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.263240 kubelet[2729]: I0129 11:58:31.262789 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-lib-modules\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.263240 kubelet[2729]: I0129 11:58:31.262852 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cni-path\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.263240 kubelet[2729]: I0129 11:58:31.262911 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a111d66-9811-4caf-8a2f-a909f93335a2-cilium-config-path\") pod \"9a111d66-9811-4caf-8a2f-a909f93335a2\" (UID: \"9a111d66-9811-4caf-8a2f-a909f93335a2\") "
Jan 29 11:58:31.263240 kubelet[2729]: I0129 11:58:31.262949 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hubble-tls\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.263240 kubelet[2729]: I0129 11:58:31.262977 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-config-path\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265396 kubelet[2729]: I0129 11:58:31.263003 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-bpf-maps\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265396 kubelet[2729]: I0129 11:58:31.263029 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-kernel\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265396 kubelet[2729]: I0129 11:58:31.263052 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-cgroup\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265396 kubelet[2729]: I0129 11:58:31.263075 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-etc-cni-netd\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265396 kubelet[2729]: I0129 11:58:31.263108 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-net\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265396 kubelet[2729]: I0129 11:58:31.263156 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prjfx\" (UniqueName: \"kubernetes.io/projected/9a111d66-9811-4caf-8a2f-a909f93335a2-kube-api-access-prjfx\") pod \"9a111d66-9811-4caf-8a2f-a909f93335a2\" (UID: \"9a111d66-9811-4caf-8a2f-a909f93335a2\") "
Jan 29 11:58:31.265767 kubelet[2729]: I0129 11:58:31.263220 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8948ca6-8c18-4419-a43d-5ca59e8c990a-clustermesh-secrets\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265767 kubelet[2729]: I0129 11:58:31.263337 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hostproc\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265767 kubelet[2729]: I0129 11:58:31.263368 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-xtables-lock\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") "
Jan 29 11:58:31.265767 kubelet[2729]: I0129 11:58:31.263399 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd8cd\" (UniqueName:
\"kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-kube-api-access-bd8cd\") pod \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\" (UID: \"b8948ca6-8c18-4419-a43d-5ca59e8c990a\") " Jan 29 11:58:31.275525 kubelet[2729]: I0129 11:58:31.274003 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.275525 kubelet[2729]: I0129 11:58:31.275143 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cni-path" (OuterVolumeSpecName: "cni-path") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.275525 kubelet[2729]: I0129 11:58:31.273917 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.275951 kubelet[2729]: I0129 11:58:31.275792 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-kube-api-access-bd8cd" (OuterVolumeSpecName: "kube-api-access-bd8cd") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "kube-api-access-bd8cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:58:31.276797 kubelet[2729]: I0129 11:58:31.276087 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.283673 kubelet[2729]: I0129 11:58:31.281438 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a111d66-9811-4caf-8a2f-a909f93335a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a111d66-9811-4caf-8a2f-a909f93335a2" (UID: "9a111d66-9811-4caf-8a2f-a909f93335a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:58:31.283673 kubelet[2729]: I0129 11:58:31.281504 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.288283 kubelet[2729]: I0129 11:58:31.287024 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:58:31.288457 kubelet[2729]: I0129 11:58:31.288389 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a111d66-9811-4caf-8a2f-a909f93335a2-kube-api-access-prjfx" (OuterVolumeSpecName: "kube-api-access-prjfx") pod "9a111d66-9811-4caf-8a2f-a909f93335a2" (UID: "9a111d66-9811-4caf-8a2f-a909f93335a2"). InnerVolumeSpecName "kube-api-access-prjfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:58:31.293709 kubelet[2729]: I0129 11:58:31.293660 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:58:31.293806 kubelet[2729]: I0129 11:58:31.293737 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.293806 kubelet[2729]: I0129 11:58:31.293768 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.293806 kubelet[2729]: I0129 11:58:31.293796 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.296284 kubelet[2729]: I0129 11:58:31.295401 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8948ca6-8c18-4419-a43d-5ca59e8c990a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:58:31.296284 kubelet[2729]: I0129 11:58:31.295484 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hostproc" (OuterVolumeSpecName: "hostproc") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.296284 kubelet[2729]: I0129 11:58:31.295752 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b8948ca6-8c18-4419-a43d-5ca59e8c990a" (UID: "b8948ca6-8c18-4419-a43d-5ca59e8c990a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:58:31.365808 kubelet[2729]: I0129 11:58:31.365581 2729 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-config-path\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.365808 kubelet[2729]: I0129 11:58:31.365652 2729 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-bpf-maps\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.365808 kubelet[2729]: I0129 11:58:31.365671 2729 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-kernel\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.365808 kubelet[2729]: I0129 11:58:31.365686 2729 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-cgroup\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.365808 kubelet[2729]: I0129 11:58:31.365715 2729 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-etc-cni-netd\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.365808 kubelet[2729]: I0129 11:58:31.365733 2729 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-host-proc-sys-net\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.365808 kubelet[2729]: I0129 11:58:31.365747 2729 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-prjfx\" (UniqueName: 
\"kubernetes.io/projected/9a111d66-9811-4caf-8a2f-a909f93335a2-kube-api-access-prjfx\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365764 2729 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8948ca6-8c18-4419-a43d-5ca59e8c990a-clustermesh-secrets\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365791 2729 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hostproc\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365851 2729 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-xtables-lock\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365870 2729 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bd8cd\" (UniqueName: \"kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-kube-api-access-bd8cd\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365885 2729 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cilium-run\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365901 2729 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-lib-modules\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365915 2729 reconciler_common.go:288] "Volume detached for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8948ca6-8c18-4419-a43d-5ca59e8c990a-cni-path\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366414 kubelet[2729]: I0129 11:58:31.365930 2729 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a111d66-9811-4caf-8a2f-a909f93335a2-cilium-config-path\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.366863 kubelet[2729]: I0129 11:58:31.365944 2729 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8948ca6-8c18-4419-a43d-5ca59e8c990a-hubble-tls\") on node \"srv-xy63l.gb1.brightbox.com\" DevicePath \"\"" Jan 29 11:58:31.765286 systemd[1]: Removed slice kubepods-besteffort-pod9a111d66_9811_4caf_8a2f_a909f93335a2.slice - libcontainer container kubepods-besteffort-pod9a111d66_9811_4caf_8a2f_a909f93335a2.slice. Jan 29 11:58:31.776272 kubelet[2729]: I0129 11:58:31.775374 2729 scope.go:117] "RemoveContainer" containerID="815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613" Jan 29 11:58:31.788053 containerd[1506]: time="2025-01-29T11:58:31.787906308Z" level=info msg="RemoveContainer for \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\"" Jan 29 11:58:31.795546 systemd[1]: Removed slice kubepods-burstable-podb8948ca6_8c18_4419_a43d_5ca59e8c990a.slice - libcontainer container kubepods-burstable-podb8948ca6_8c18_4419_a43d_5ca59e8c990a.slice. Jan 29 11:58:31.795704 systemd[1]: kubepods-burstable-podb8948ca6_8c18_4419_a43d_5ca59e8c990a.slice: Consumed 10.832s CPU time. 
Jan 29 11:58:31.802396 containerd[1506]: time="2025-01-29T11:58:31.802348802Z" level=info msg="RemoveContainer for \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\" returns successfully" Jan 29 11:58:31.802888 kubelet[2729]: I0129 11:58:31.802853 2729 scope.go:117] "RemoveContainer" containerID="815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613" Jan 29 11:58:31.803488 containerd[1506]: time="2025-01-29T11:58:31.803363117Z" level=error msg="ContainerStatus for \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\": not found" Jan 29 11:58:31.805339 kubelet[2729]: E0129 11:58:31.804916 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\": not found" containerID="815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613" Jan 29 11:58:31.805339 kubelet[2729]: I0129 11:58:31.804980 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613"} err="failed to get container status \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\": rpc error: code = NotFound desc = an error occurred when try to find container \"815adf09c3f315e126861071c9a55f78a9ed8cba25891c055d6e67e3f21b7613\": not found" Jan 29 11:58:31.805339 kubelet[2729]: I0129 11:58:31.805095 2729 scope.go:117] "RemoveContainer" containerID="68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8" Jan 29 11:58:31.810632 containerd[1506]: time="2025-01-29T11:58:31.810593510Z" level=info msg="RemoveContainer for \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\"" Jan 29 11:58:31.817548 
containerd[1506]: time="2025-01-29T11:58:31.817388472Z" level=info msg="RemoveContainer for \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\" returns successfully" Jan 29 11:58:31.819569 kubelet[2729]: I0129 11:58:31.819450 2729 scope.go:117] "RemoveContainer" containerID="f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5" Jan 29 11:58:31.821994 containerd[1506]: time="2025-01-29T11:58:31.821953247Z" level=info msg="RemoveContainer for \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\"" Jan 29 11:58:31.826596 containerd[1506]: time="2025-01-29T11:58:31.826563304Z" level=info msg="RemoveContainer for \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\" returns successfully" Jan 29 11:58:31.826853 kubelet[2729]: I0129 11:58:31.826803 2729 scope.go:117] "RemoveContainer" containerID="93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d" Jan 29 11:58:31.828663 containerd[1506]: time="2025-01-29T11:58:31.828628544Z" level=info msg="RemoveContainer for \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\"" Jan 29 11:58:31.839287 containerd[1506]: time="2025-01-29T11:58:31.837879632Z" level=info msg="RemoveContainer for \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\" returns successfully" Jan 29 11:58:31.841775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-605e66f5609825d5304ad8b01e638d8402729b31e63cbfb4054d78f069353393-rootfs.mount: Deactivated successfully. Jan 29 11:58:31.842216 kubelet[2729]: I0129 11:58:31.841768 2729 scope.go:117] "RemoveContainer" containerID="aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e" Jan 29 11:58:31.841991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1-rootfs.mount: Deactivated successfully. 
Jan 29 11:58:31.842130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8293ebc33abe4d15156f065699e245f942f17f05082bbc48c61641a53691ec1-shm.mount: Deactivated successfully. Jan 29 11:58:31.842305 systemd[1]: var-lib-kubelet-pods-9a111d66\x2d9811\x2d4caf\x2d8a2f\x2da909f93335a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dprjfx.mount: Deactivated successfully. Jan 29 11:58:31.842452 systemd[1]: var-lib-kubelet-pods-b8948ca6\x2d8c18\x2d4419\x2da43d\x2d5ca59e8c990a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbd8cd.mount: Deactivated successfully. Jan 29 11:58:31.842583 systemd[1]: var-lib-kubelet-pods-b8948ca6\x2d8c18\x2d4419\x2da43d\x2d5ca59e8c990a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:58:31.842712 systemd[1]: var-lib-kubelet-pods-b8948ca6\x2d8c18\x2d4419\x2da43d\x2d5ca59e8c990a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:58:31.853671 containerd[1506]: time="2025-01-29T11:58:31.853582663Z" level=info msg="RemoveContainer for \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\"" Jan 29 11:58:31.867697 containerd[1506]: time="2025-01-29T11:58:31.867517525Z" level=info msg="RemoveContainer for \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\" returns successfully" Jan 29 11:58:31.868182 kubelet[2729]: I0129 11:58:31.868015 2729 scope.go:117] "RemoveContainer" containerID="4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587" Jan 29 11:58:31.870156 containerd[1506]: time="2025-01-29T11:58:31.869857517Z" level=info msg="RemoveContainer for \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\"" Jan 29 11:58:31.881029 containerd[1506]: time="2025-01-29T11:58:31.880235520Z" level=info msg="RemoveContainer for \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\" returns successfully" Jan 29 11:58:31.882047 kubelet[2729]: I0129 11:58:31.881971 2729 
scope.go:117] "RemoveContainer" containerID="68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8" Jan 29 11:58:31.882949 containerd[1506]: time="2025-01-29T11:58:31.882703870Z" level=error msg="ContainerStatus for \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\": not found" Jan 29 11:58:31.883103 kubelet[2729]: E0129 11:58:31.883067 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\": not found" containerID="68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8" Jan 29 11:58:31.883308 kubelet[2729]: I0129 11:58:31.883115 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8"} err="failed to get container status \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\": rpc error: code = NotFound desc = an error occurred when try to find container \"68978afd564942d230eaa93c1a315cd66e995272829186d8c73aff8bff45dab8\": not found" Jan 29 11:58:31.883427 kubelet[2729]: I0129 11:58:31.883313 2729 scope.go:117] "RemoveContainer" containerID="f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5" Jan 29 11:58:31.883810 containerd[1506]: time="2025-01-29T11:58:31.883644474Z" level=error msg="ContainerStatus for \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\": not found" Jan 29 11:58:31.884161 kubelet[2729]: E0129 11:58:31.883962 2729 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\": not found" containerID="f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5" Jan 29 11:58:31.884161 kubelet[2729]: I0129 11:58:31.883993 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5"} err="failed to get container status \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1be44ef90bf19733472c354ec273333ab9fc6323f886c9ebd2cfc0733da3eb5\": not found" Jan 29 11:58:31.884161 kubelet[2729]: I0129 11:58:31.884031 2729 scope.go:117] "RemoveContainer" containerID="93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d" Jan 29 11:58:31.884584 containerd[1506]: time="2025-01-29T11:58:31.884411443Z" level=error msg="ContainerStatus for \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\": not found" Jan 29 11:58:31.885673 kubelet[2729]: E0129 11:58:31.885147 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\": not found" containerID="93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d" Jan 29 11:58:31.885673 kubelet[2729]: I0129 11:58:31.885203 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d"} err="failed to get container status \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"93da139c5aae959c22d607a33fb204a72286a21c2fd0cfce0005b42f9c86a54d\": not found" Jan 29 11:58:31.885673 kubelet[2729]: I0129 11:58:31.885226 2729 scope.go:117] "RemoveContainer" containerID="aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e" Jan 29 11:58:31.886471 containerd[1506]: time="2025-01-29T11:58:31.885602578Z" level=error msg="ContainerStatus for \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\": not found" Jan 29 11:58:31.886662 kubelet[2729]: E0129 11:58:31.886142 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\": not found" containerID="aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e" Jan 29 11:58:31.886662 kubelet[2729]: I0129 11:58:31.886176 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e"} err="failed to get container status \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\": rpc error: code = NotFound desc = an error occurred when try to find container \"aae0b561f9ea8d1c2d274a953c98d3fe4cf64b1609fe68dcf3875aef4070f31e\": not found" Jan 29 11:58:31.886662 kubelet[2729]: I0129 11:58:31.886214 2729 scope.go:117] "RemoveContainer" containerID="4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587" Jan 29 11:58:31.886937 containerd[1506]: time="2025-01-29T11:58:31.886898167Z" level=error msg="ContainerStatus for \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\": not found" Jan 29 11:58:31.887164 kubelet[2729]: E0129 11:58:31.887072 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\": not found" containerID="4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587" Jan 29 11:58:31.887164 kubelet[2729]: I0129 11:58:31.887104 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587"} err="failed to get container status \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d21cdb274271da57f34e5caa38e89d09fffb39f11540d84d0453467b120b587\": not found" Jan 29 11:58:32.827404 sshd[4322]: Connection closed by 139.178.68.195 port 47250 Jan 29 11:58:32.828897 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:32.833227 systemd[1]: sshd@24-10.230.10.162:22-139.178.68.195:47250.service: Deactivated successfully. Jan 29 11:58:32.836743 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:58:32.837121 systemd[1]: session-26.scope: Consumed 1.404s CPU time. Jan 29 11:58:32.839946 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:58:32.842562 systemd-logind[1487]: Removed session 26. Jan 29 11:58:32.995586 systemd[1]: Started sshd@25-10.230.10.162:22-139.178.68.195:47264.service - OpenSSH per-connection server daemon (139.178.68.195:47264). 
Jan 29 11:58:33.257715 kubelet[2729]: I0129 11:58:33.257557 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a111d66-9811-4caf-8a2f-a909f93335a2" path="/var/lib/kubelet/pods/9a111d66-9811-4caf-8a2f-a909f93335a2/volumes"
Jan 29 11:58:33.259465 kubelet[2729]: I0129 11:58:33.259016 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8948ca6-8c18-4419-a43d-5ca59e8c990a" path="/var/lib/kubelet/pods/b8948ca6-8c18-4419-a43d-5ca59e8c990a/volumes"
Jan 29 11:58:33.444689 kubelet[2729]: E0129 11:58:33.438668 2729 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:58:33.895642 sshd[4486]: Accepted publickey for core from 139.178.68.195 port 47264 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:33.897867 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:33.905456 systemd-logind[1487]: New session 27 of user core.
Jan 29 11:58:33.911485 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 11:58:35.595005 kubelet[2729]: I0129 11:58:35.594869 2729 setters.go:600] "Node became not ready" node="srv-xy63l.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:58:35Z","lastTransitionTime":"2025-01-29T11:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:58:35.665830 kubelet[2729]: E0129 11:58:35.665015 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8948ca6-8c18-4419-a43d-5ca59e8c990a" containerName="apply-sysctl-overwrites"
Jan 29 11:58:35.665830 kubelet[2729]: E0129 11:58:35.665074 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8948ca6-8c18-4419-a43d-5ca59e8c990a" containerName="mount-bpf-fs"
Jan 29 11:58:35.665830 kubelet[2729]: E0129 11:58:35.665088 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8948ca6-8c18-4419-a43d-5ca59e8c990a" containerName="clean-cilium-state"
Jan 29 11:58:35.665830 kubelet[2729]: E0129 11:58:35.665098 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8948ca6-8c18-4419-a43d-5ca59e8c990a" containerName="cilium-agent"
Jan 29 11:58:35.665830 kubelet[2729]: E0129 11:58:35.665109 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8948ca6-8c18-4419-a43d-5ca59e8c990a" containerName="mount-cgroup"
Jan 29 11:58:35.665830 kubelet[2729]: E0129 11:58:35.665120 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a111d66-9811-4caf-8a2f-a909f93335a2" containerName="cilium-operator"
Jan 29 11:58:35.676577 kubelet[2729]: I0129 11:58:35.672086 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a111d66-9811-4caf-8a2f-a909f93335a2" containerName="cilium-operator"
Jan 29 11:58:35.676577 kubelet[2729]: I0129 11:58:35.676166 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8948ca6-8c18-4419-a43d-5ca59e8c990a" containerName="cilium-agent"
Jan 29 11:58:35.731971 systemd[1]: Created slice kubepods-burstable-podb8ade772_e431_4747_8807_65016b0e2eb0.slice - libcontainer container kubepods-burstable-podb8ade772_e431_4747_8807_65016b0e2eb0.slice.
Jan 29 11:58:35.786294 sshd[4488]: Connection closed by 139.178.68.195 port 47264
Jan 29 11:58:35.785963 sshd-session[4486]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:35.791090 systemd[1]: sshd@25-10.230.10.162:22-139.178.68.195:47264.service: Deactivated successfully.
Jan 29 11:58:35.795818 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 11:58:35.796257 systemd[1]: session-27.scope: Consumed 1.148s CPU time.
Jan 29 11:58:35.796884 kubelet[2729]: I0129 11:58:35.796832 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-host-proc-sys-net\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.796993 kubelet[2729]: I0129 11:58:35.796931 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-hostproc\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.796993 kubelet[2729]: I0129 11:58:35.796971 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-host-proc-sys-kernel\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797118 kubelet[2729]: I0129 11:58:35.797008 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh4mn\" (UniqueName: \"kubernetes.io/projected/b8ade772-e431-4747-8807-65016b0e2eb0-kube-api-access-bh4mn\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797118 kubelet[2729]: I0129 11:58:35.797063 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8ade772-e431-4747-8807-65016b0e2eb0-hubble-tls\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797118 kubelet[2729]: I0129 11:58:35.797095 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-cilium-run\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797452 kubelet[2729]: I0129 11:58:35.797133 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-lib-modules\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797452 kubelet[2729]: I0129 11:58:35.797176 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-cni-path\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797452 kubelet[2729]: I0129 11:58:35.797205 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-etc-cni-netd\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797452 kubelet[2729]: I0129 11:58:35.797263 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-xtables-lock\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797452 kubelet[2729]: I0129 11:58:35.797301 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-bpf-maps\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797452 kubelet[2729]: I0129 11:58:35.797339 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8ade772-e431-4747-8807-65016b0e2eb0-clustermesh-secrets\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797746 kubelet[2729]: I0129 11:58:35.797366 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8ade772-e431-4747-8807-65016b0e2eb0-cilium-config-path\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797746 kubelet[2729]: I0129 11:58:35.797389 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8ade772-e431-4747-8807-65016b0e2eb0-cilium-cgroup\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.797746 kubelet[2729]: I0129 11:58:35.797410 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b8ade772-e431-4747-8807-65016b0e2eb0-cilium-ipsec-secrets\") pod \"cilium-cf869\" (UID: \"b8ade772-e431-4747-8807-65016b0e2eb0\") " pod="kube-system/cilium-cf869"
Jan 29 11:58:35.799059 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit.
Jan 29 11:58:35.801473 systemd-logind[1487]: Removed session 27.
Jan 29 11:58:35.959715 systemd[1]: Started sshd@26-10.230.10.162:22-139.178.68.195:40492.service - OpenSSH per-connection server daemon (139.178.68.195:40492).
Jan 29 11:58:36.044073 containerd[1506]: time="2025-01-29T11:58:36.043951083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cf869,Uid:b8ade772-e431-4747-8807-65016b0e2eb0,Namespace:kube-system,Attempt:0,}"
Jan 29 11:58:36.083412 containerd[1506]: time="2025-01-29T11:58:36.081934140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:58:36.083412 containerd[1506]: time="2025-01-29T11:58:36.083311881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:58:36.083412 containerd[1506]: time="2025-01-29T11:58:36.083353517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:58:36.085518 containerd[1506]: time="2025-01-29T11:58:36.083535879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:58:36.130044 systemd[1]: Started cri-containerd-ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2.scope - libcontainer container ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2.
Jan 29 11:58:36.184446 containerd[1506]: time="2025-01-29T11:58:36.184107829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cf869,Uid:b8ade772-e431-4747-8807-65016b0e2eb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\""
Jan 29 11:58:36.190273 containerd[1506]: time="2025-01-29T11:58:36.189817811Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:58:36.204923 containerd[1506]: time="2025-01-29T11:58:36.204779130Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93\""
Jan 29 11:58:36.207319 containerd[1506]: time="2025-01-29T11:58:36.205288898Z" level=info msg="StartContainer for \"0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93\""
Jan 29 11:58:36.242474 systemd[1]: Started cri-containerd-0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93.scope - libcontainer container 0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93.
Jan 29 11:58:36.284660 containerd[1506]: time="2025-01-29T11:58:36.284604799Z" level=info msg="StartContainer for \"0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93\" returns successfully"
Jan 29 11:58:36.306111 systemd[1]: cri-containerd-0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93.scope: Deactivated successfully.
Jan 29 11:58:36.350588 containerd[1506]: time="2025-01-29T11:58:36.350338318Z" level=info msg="shim disconnected" id=0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93 namespace=k8s.io
Jan 29 11:58:36.350588 containerd[1506]: time="2025-01-29T11:58:36.350578367Z" level=warning msg="cleaning up after shim disconnected" id=0ad32b7aa6e268ff1ca4d0a99bbd2e7039b5ef961e13eca69b5e887c92006e93 namespace=k8s.io
Jan 29 11:58:36.350895 containerd[1506]: time="2025-01-29T11:58:36.350602839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:36.806953 containerd[1506]: time="2025-01-29T11:58:36.806833569Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:58:36.820162 containerd[1506]: time="2025-01-29T11:58:36.820008826Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30\""
Jan 29 11:58:36.821824 containerd[1506]: time="2025-01-29T11:58:36.821780843Z" level=info msg="StartContainer for \"ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30\""
Jan 29 11:58:36.862176 sshd[4502]: Accepted publickey for core from 139.178.68.195 port 40492 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:36.865228 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:36.868495 systemd[1]: Started cri-containerd-ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30.scope - libcontainer container ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30.
Jan 29 11:58:36.879002 systemd-logind[1487]: New session 28 of user core.
Jan 29 11:58:36.886763 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 11:58:36.927785 containerd[1506]: time="2025-01-29T11:58:36.927711281Z" level=info msg="StartContainer for \"ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30\" returns successfully"
Jan 29 11:58:36.941078 systemd[1]: cri-containerd-ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30.scope: Deactivated successfully.
Jan 29 11:58:36.974114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30-rootfs.mount: Deactivated successfully.
Jan 29 11:58:36.984571 containerd[1506]: time="2025-01-29T11:58:36.984299604Z" level=info msg="shim disconnected" id=ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30 namespace=k8s.io
Jan 29 11:58:36.984571 containerd[1506]: time="2025-01-29T11:58:36.984565430Z" level=warning msg="cleaning up after shim disconnected" id=ffaf794722ce44416058644baa4c054e00129451641bd5559fdd546d9decef30 namespace=k8s.io
Jan 29 11:58:36.984864 containerd[1506]: time="2025-01-29T11:58:36.984595799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:37.480604 sshd[4629]: Connection closed by 139.178.68.195 port 40492
Jan 29 11:58:37.481725 sshd-session[4502]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:37.487993 systemd[1]: sshd@26-10.230.10.162:22-139.178.68.195:40492.service: Deactivated successfully.
Jan 29 11:58:37.491636 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 11:58:37.492903 systemd-logind[1487]: Session 28 logged out. Waiting for processes to exit.
Jan 29 11:58:37.494521 systemd-logind[1487]: Removed session 28.
Jan 29 11:58:37.639627 systemd[1]: Started sshd@27-10.230.10.162:22-139.178.68.195:40498.service - OpenSSH per-connection server daemon (139.178.68.195:40498).
Jan 29 11:58:37.813011 containerd[1506]: time="2025-01-29T11:58:37.812465755Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:58:37.848622 containerd[1506]: time="2025-01-29T11:58:37.848553008Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6\""
Jan 29 11:58:37.852281 containerd[1506]: time="2025-01-29T11:58:37.850818638Z" level=info msg="StartContainer for \"79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6\""
Jan 29 11:58:37.899580 systemd[1]: Started cri-containerd-79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6.scope - libcontainer container 79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6.
Jan 29 11:58:37.953657 containerd[1506]: time="2025-01-29T11:58:37.953609245Z" level=info msg="StartContainer for \"79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6\" returns successfully"
Jan 29 11:58:37.962103 systemd[1]: cri-containerd-79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6.scope: Deactivated successfully.
Jan 29 11:58:37.994488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6-rootfs.mount: Deactivated successfully.
Jan 29 11:58:38.000741 containerd[1506]: time="2025-01-29T11:58:38.000608026Z" level=info msg="shim disconnected" id=79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6 namespace=k8s.io
Jan 29 11:58:38.000741 containerd[1506]: time="2025-01-29T11:58:38.000692325Z" level=warning msg="cleaning up after shim disconnected" id=79b4614f5dbf34aaf28b2791fa07809fae9fddf16d44af27d158262ed02c7ee6 namespace=k8s.io
Jan 29 11:58:38.000741 containerd[1506]: time="2025-01-29T11:58:38.000708392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:38.446232 kubelet[2729]: E0129 11:58:38.446156 2729 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:58:38.546289 sshd[4671]: Accepted publickey for core from 139.178.68.195 port 40498 ssh2: RSA SHA256:1NfIXCvxej/z4X5wlkwGw1mN1hR8YjkIU7Ph0XPIiZI
Jan 29 11:58:38.548327 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:38.556129 systemd-logind[1487]: New session 29 of user core.
Jan 29 11:58:38.561455 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 29 11:58:38.832902 containerd[1506]: time="2025-01-29T11:58:38.831594947Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:58:38.852911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925705281.mount: Deactivated successfully.
Jan 29 11:58:38.855913 containerd[1506]: time="2025-01-29T11:58:38.855575889Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d\""
Jan 29 11:58:38.857550 containerd[1506]: time="2025-01-29T11:58:38.857459959Z" level=info msg="StartContainer for \"bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d\""
Jan 29 11:58:38.924439 systemd[1]: Started cri-containerd-bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d.scope - libcontainer container bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d.
Jan 29 11:58:38.973420 systemd[1]: cri-containerd-bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d.scope: Deactivated successfully.
Jan 29 11:58:38.977681 containerd[1506]: time="2025-01-29T11:58:38.977189451Z" level=info msg="StartContainer for \"bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d\" returns successfully"
Jan 29 11:58:39.005823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d-rootfs.mount: Deactivated successfully.
Jan 29 11:58:39.014826 containerd[1506]: time="2025-01-29T11:58:39.014530929Z" level=info msg="shim disconnected" id=bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d namespace=k8s.io
Jan 29 11:58:39.014826 containerd[1506]: time="2025-01-29T11:58:39.014606680Z" level=warning msg="cleaning up after shim disconnected" id=bc07134808fed48d0987c74942b426d7eab3c281bea23a7838e689cabc05800d namespace=k8s.io
Jan 29 11:58:39.014826 containerd[1506]: time="2025-01-29T11:58:39.014638228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:39.835169 containerd[1506]: time="2025-01-29T11:58:39.834994571Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:58:39.856492 containerd[1506]: time="2025-01-29T11:58:39.856118778Z" level=info msg="CreateContainer within sandbox \"ee946e1bd8e4453284a0d512b30ee15aa29d3944df9f8bcff4e644c6ecbf72c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6cfef9f3e17ed497ec15fa7d02298ff4533af3ec243128bed3d0f18e32bf6d97\""
Jan 29 11:58:39.857503 containerd[1506]: time="2025-01-29T11:58:39.857437404Z" level=info msg="StartContainer for \"6cfef9f3e17ed497ec15fa7d02298ff4533af3ec243128bed3d0f18e32bf6d97\""
Jan 29 11:58:39.903633 systemd[1]: Started cri-containerd-6cfef9f3e17ed497ec15fa7d02298ff4533af3ec243128bed3d0f18e32bf6d97.scope - libcontainer container 6cfef9f3e17ed497ec15fa7d02298ff4533af3ec243128bed3d0f18e32bf6d97.
Jan 29 11:58:39.950834 containerd[1506]: time="2025-01-29T11:58:39.950545915Z" level=info msg="StartContainer for \"6cfef9f3e17ed497ec15fa7d02298ff4533af3ec243128bed3d0f18e32bf6d97\" returns successfully"
Jan 29 11:58:40.706329 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 11:58:40.866824 kubelet[2729]: I0129 11:58:40.866058 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cf869" podStartSLOduration=5.866013708 podStartE2EDuration="5.866013708s" podCreationTimestamp="2025-01-29 11:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:58:40.865957014 +0000 UTC m=+147.846729961" watchObservedRunningTime="2025-01-29 11:58:40.866013708 +0000 UTC m=+147.846786651"
Jan 29 11:58:41.405015 systemd[1]: run-containerd-runc-k8s.io-6cfef9f3e17ed497ec15fa7d02298ff4533af3ec243128bed3d0f18e32bf6d97-runc.RTU4DP.mount: Deactivated successfully.
Jan 29 11:58:44.598536 systemd-networkd[1430]: lxc_health: Link UP
Jan 29 11:58:44.605716 systemd-networkd[1430]: lxc_health: Gained carrier
Jan 29 11:58:45.920081 systemd[1]: run-containerd-runc-k8s.io-6cfef9f3e17ed497ec15fa7d02298ff4533af3ec243128bed3d0f18e32bf6d97-runc.SjAwXR.mount: Deactivated successfully.
Jan 29 11:58:46.304611 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Jan 29 11:58:50.684941 sshd[4730]: Connection closed by 139.178.68.195 port 40498
Jan 29 11:58:50.686542 sshd-session[4671]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:50.698477 systemd[1]: sshd@27-10.230.10.162:22-139.178.68.195:40498.service: Deactivated successfully.
Jan 29 11:58:50.704674 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 11:58:50.708711 systemd-logind[1487]: Session 29 logged out. Waiting for processes to exit.
Jan 29 11:58:50.711776 systemd-logind[1487]: Removed session 29.