Feb 13 20:24:47.888331 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 20:24:47.888364 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 20:24:47.888374 kernel: BIOS-provided physical RAM map:
Feb 13 20:24:47.888384 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:24:47.888390 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:24:47.888397 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:24:47.888405 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 20:24:47.888412 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 20:24:47.888419 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 20:24:47.888426 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 20:24:47.888433 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:24:47.888440 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:24:47.888449 kernel: NX (Execute Disable) protection: active
Feb 13 20:24:47.888457 kernel: APIC: Static calls initialized
Feb 13 20:24:47.888465 kernel: SMBIOS 2.8 present.
Feb 13 20:24:47.888474 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 20:24:47.888482 kernel: Hypervisor detected: KVM
Feb 13 20:24:47.888492 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:24:47.888500 kernel: kvm-clock: using sched offset of 3925161109 cycles
Feb 13 20:24:47.888508 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:24:47.888517 kernel: tsc: Detected 2294.608 MHz processor
Feb 13 20:24:47.888525 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:24:47.888542 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:24:47.888550 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 20:24:47.888558 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:24:47.888566 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:24:47.888576 kernel: Using GB pages for direct mapping
Feb 13 20:24:47.888584 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:24:47.888592 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 20:24:47.888600 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:24:47.888608 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:24:47.888615 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:24:47.888623 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 20:24:47.888631 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:24:47.888640 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:24:47.888650 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:24:47.888658 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:24:47.888666 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 20:24:47.888674 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 20:24:47.888682 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 20:24:47.888694 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 20:24:47.888702 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 20:24:47.888712 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 20:24:47.888721 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 20:24:47.888729 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:24:47.888737 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:24:47.888746 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 20:24:47.888754 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 20:24:47.888762 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 20:24:47.888770 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 20:24:47.888781 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 20:24:47.888789 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 20:24:47.888797 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 20:24:47.888805 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 20:24:47.888814 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 20:24:47.888822 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 20:24:47.888830 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 20:24:47.888838 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 20:24:47.888846 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 20:24:47.888856 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 20:24:47.888865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:24:47.888873 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:24:47.888881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 20:24:47.888890 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 20:24:47.888898 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 20:24:47.888906 kernel: Zone ranges:
Feb 13 20:24:47.888915 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:24:47.888923 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 20:24:47.888931 kernel: Normal empty
Feb 13 20:24:47.888942 kernel: Movable zone start for each node
Feb 13 20:24:47.888950 kernel: Early memory node ranges
Feb 13 20:24:47.888958 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:24:47.888966 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 20:24:47.888975 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 20:24:47.888983 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:24:47.888991 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:24:47.889000 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 20:24:47.889028 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:24:47.889038 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:24:47.889047 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:24:47.889055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:24:47.889063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:24:47.889071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:24:47.889080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:24:47.889088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:24:47.889096 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:24:47.889105 kernel: TSC deadline timer available
Feb 13 20:24:47.889115 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 20:24:47.889124 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:24:47.889132 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 20:24:47.889140 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:24:47.889149 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:24:47.889157 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 20:24:47.889166 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 20:24:47.889174 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 20:24:47.889182 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 20:24:47.889192 kernel: kvm-guest: PV spinlocks enabled
Feb 13 20:24:47.889201 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:24:47.889210 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 20:24:47.889219 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:24:47.889227 kernel: random: crng init done
Feb 13 20:24:47.889235 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:24:47.889244 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:24:47.889252 kernel: Fallback order for Node 0: 0
Feb 13 20:24:47.889263 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 20:24:47.889271 kernel: Policy zone: DMA32
Feb 13 20:24:47.889279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:24:47.889287 kernel: software IO TLB: area num 16.
Feb 13 20:24:47.889296 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 196876K reserved, 0K cma-reserved)
Feb 13 20:24:47.889304 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 20:24:47.889312 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 20:24:47.889321 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:24:47.889329 kernel: Dynamic Preempt: voluntary
Feb 13 20:24:47.889339 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:24:47.889349 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:24:47.889357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 20:24:47.889366 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:24:47.889375 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:24:47.889393 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:24:47.889411 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:24:47.889420 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 20:24:47.889429 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 20:24:47.889438 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:24:47.889447 kernel: Console: colour VGA+ 80x25
Feb 13 20:24:47.889456 kernel: printk: console [tty0] enabled
Feb 13 20:24:47.889467 kernel: printk: console [ttyS0] enabled
Feb 13 20:24:47.889476 kernel: ACPI: Core revision 20230628
Feb 13 20:24:47.889485 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:24:47.889493 kernel: x2apic enabled
Feb 13 20:24:47.889502 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:24:47.889514 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Feb 13 20:24:47.889523 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Feb 13 20:24:47.889539 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 20:24:47.889548 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:24:47.889557 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:24:47.889566 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:24:47.889574 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 20:24:47.889583 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 20:24:47.889592 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 20:24:47.889600 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:24:47.889611 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 20:24:47.889620 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 20:24:47.889629 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:24:47.889638 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:24:47.889647 kernel: TAA: Mitigation: Clear CPU buffers
Feb 13 20:24:47.889655 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:24:47.889664 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 20:24:47.889673 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:24:47.889682 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:24:47.889690 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:24:47.889699 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 20:24:47.889710 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 20:24:47.889719 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 20:24:47.889728 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 20:24:47.889737 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:24:47.889745 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 20:24:47.889754 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 20:24:47.889763 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 20:24:47.889771 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Feb 13 20:24:47.889780 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Feb 13 20:24:47.889789 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:24:47.889797 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:24:47.889808 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:24:47.889817 kernel: landlock: Up and running.
Feb 13 20:24:47.889825 kernel: SELinux: Initializing.
Feb 13 20:24:47.889834 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:24:47.889843 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:24:47.889851 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Feb 13 20:24:47.889860 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:24:47.889869 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:24:47.889878 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:24:47.889887 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 20:24:47.889898 kernel: signal: max sigframe size: 3632
Feb 13 20:24:47.889907 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:24:47.889916 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:24:47.889925 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:24:47.889934 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:24:47.889943 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:24:47.889952 kernel: .... node #0, CPUs: #1
Feb 13 20:24:47.889961 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 20:24:47.889969 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:24:47.889980 kernel: smpboot: Max logical packages: 16
Feb 13 20:24:47.889989 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Feb 13 20:24:47.889998 kernel: devtmpfs: initialized
Feb 13 20:24:47.890029 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:24:47.890038 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:24:47.890047 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 20:24:47.890056 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:24:47.890065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:24:47.890074 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:24:47.890086 kernel: audit: type=2000 audit(1739478287.317:1): state=initialized audit_enabled=0 res=1
Feb 13 20:24:47.890094 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:24:47.890103 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:24:47.890112 kernel: cpuidle: using governor menu
Feb 13 20:24:47.890121 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:24:47.890129 kernel: dca service started, version 1.12.1
Feb 13 20:24:47.890138 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 20:24:47.890148 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 20:24:47.890157 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:24:47.890167 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:24:47.890176 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:24:47.890185 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:24:47.890194 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:24:47.890203 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:24:47.890212 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:24:47.890220 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:24:47.890229 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:24:47.890238 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:24:47.890249 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:24:47.890257 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:24:47.890266 kernel: ACPI: Interpreter enabled
Feb 13 20:24:47.890275 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:24:47.890284 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:24:47.890293 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:24:47.890301 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:24:47.890310 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 20:24:47.890319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:24:47.890489 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:24:47.890585 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:24:47.890668 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:24:47.890680 kernel: PCI host bridge to bus 0000:00
Feb 13 20:24:47.890770 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:24:47.890846 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:24:47.890932 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:24:47.891020 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 20:24:47.891101 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 20:24:47.891176 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 20:24:47.891251 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:24:47.891361 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 20:24:47.891465 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 20:24:47.891591 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 20:24:47.891685 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 20:24:47.891778 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 20:24:47.891872 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:24:47.891978 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.892094 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 20:24:47.892199 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.892299 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 20:24:47.892408 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.892503 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 20:24:47.892614 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.892709 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 20:24:47.892811 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.892909 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 20:24:47.893038 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.893134 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 20:24:47.893235 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.893327 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 20:24:47.893425 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 20:24:47.893525 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 20:24:47.893646 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:24:47.893732 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 20:24:47.893817 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 20:24:47.893902 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 20:24:47.893987 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 20:24:47.894088 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:24:47.894179 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:24:47.894264 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 20:24:47.894349 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 20:24:47.894450 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 20:24:47.894568 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 20:24:47.894671 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 20:24:47.894787 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 20:24:47.894880 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 20:24:47.894981 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 20:24:47.895122 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 20:24:47.895226 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 20:24:47.895321 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 20:24:47.895432 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 20:24:47.895536 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 20:24:47.895629 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 20:24:47.895736 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 20:24:47.895845 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 20:24:47.895944 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 20:24:47.896060 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 20:24:47.896161 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 20:24:47.896267 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 20:24:47.896366 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 20:24:47.896463 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 20:24:47.896565 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 20:24:47.896659 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 20:24:47.896765 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 20:24:47.896868 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 20:24:47.896974 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 20:24:47.897158 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 20:24:47.897250 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 20:24:47.897343 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 20:24:47.897446 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 20:24:47.897546 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 20:24:47.897643 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 20:24:47.897741 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 20:24:47.897833 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 20:24:47.897931 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 20:24:47.898222 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 20:24:47.898312 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 20:24:47.898398 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 20:24:47.898480 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 20:24:47.898573 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 20:24:47.898681 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 20:24:47.898772 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 20:24:47.898863 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 20:24:47.898876 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:24:47.898897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:24:47.898906 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:24:47.898915 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:24:47.898924 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 20:24:47.898933 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 20:24:47.898945 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 20:24:47.898954 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 20:24:47.898963 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 20:24:47.898972 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 20:24:47.898981 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 20:24:47.898990 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 20:24:47.899015 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 20:24:47.899025 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 20:24:47.899034 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 20:24:47.902031 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 20:24:47.902055 kernel: iommu: Default domain type: Translated
Feb 13 20:24:47.902067 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:24:47.902077 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:24:47.902087 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:24:47.902097 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:24:47.902107 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 20:24:47.902273 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 20:24:47.902379 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 20:24:47.902473 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:24:47.902487 kernel: vgaarb: loaded
Feb 13 20:24:47.902498 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:24:47.902508 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:24:47.902518 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:24:47.902536 kernel: pnp: PnP ACPI init
Feb 13 20:24:47.902648 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 20:24:47.902667 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 20:24:47.902678 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:24:47.902688 kernel: NET: Registered PF_INET protocol family
Feb 13 20:24:47.902699 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:24:47.902709 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:24:47.902719 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:24:47.902729 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:24:47.902739 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:24:47.902749 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:24:47.902762 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:24:47.902772 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:24:47.902782 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:24:47.902792 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:24:47.902890 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 20:24:47.902989 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 20:24:47.903105 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 20:24:47.903205 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 20:24:47.903302 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 20:24:47.903396 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 20:24:47.903496 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 20:24:47.903626 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 20:24:47.903722 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 20:24:47.903820 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 20:24:47.903913 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 20:24:47.904029 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 20:24:47.904137 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 20:24:47.904230 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 20:24:47.904335 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 20:24:47.904428 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 20:24:47.904536 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 20:24:47.904631 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 20:24:47.904730 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 20:24:47.904822 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 20:24:47.904917 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 20:24:47.905156 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 20:24:47.905253 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 20:24:47.905377 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 20:24:47.905475 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 20:24:47.905577 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 20:24:47.905672 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 20:24:47.905764 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 20:24:47.905857 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 20:24:47.907053 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 20:24:47.907193 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 20:24:47.907293 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 20:24:47.907396 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 20:24:47.907490 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 20:24:47.907649 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 20:24:47.907743 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 20:24:47.907846 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 20:24:47.907930 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 20:24:47.908899 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 20:24:47.909057 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 20:24:47.909149 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 20:24:47.909265 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 20:24:47.909375 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 20:24:47.909468 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 20:24:47.909573 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 20:24:47.909667 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 20:24:47.909775 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 20:24:47.909868 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 20:24:47.909961 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 20:24:47.910069 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 20:24:47.910163 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:24:47.910245 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:24:47.910326 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:24:47.910408 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 20:24:47.910511 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 20:24:47.910606 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 20:24:47.910712 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 20:24:47.910791 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 20:24:47.910868 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 20:24:47.910956 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 20:24:47.912121 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 20:24:47.912230 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 20:24:47.912331 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 20:24:47.912428 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 20:24:47.912554 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 20:24:47.912811 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 20:24:47.912909 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 20:24:47.912991 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 20:24:47.913089 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 20:24:47.914111 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 20:24:47.914228 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 20:24:47.914597 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 20:24:47.914704 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 20:24:47.914785 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 20:24:47.914864 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 20:24:47.914966 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 20:24:47.915100 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 20:24:47.915187 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 20:24:47.915281 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 20:24:47.915366 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 20:24:47.915505 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 20:24:47.915534 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 20:24:47.915545 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:24:47.915556 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:24:47.915568 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Feb 13 20:24:47.915609 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:24:47.915621 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Feb 13 20:24:47.915632 kernel: Initialise system trusted keyrings
Feb 13 20:24:47.915642 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:24:47.915653 kernel: Key type asymmetric registered
Feb 13 20:24:47.915668 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:24:47.915678 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:24:47.915689 kernel: io scheduler mq-deadline registered
Feb 13 20:24:47.915700 kernel: io scheduler kyber registered
Feb 13 20:24:47.915711 kernel: io scheduler bfq registered
Feb 13 20:24:47.915815 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Feb 13 20:24:47.915914 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Feb 13 20:24:47.916024 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.916127 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Feb 13 20:24:47.916222 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Feb 13 20:24:47.916317 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.916432 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Feb 13 20:24:47.916554 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Feb 13 20:24:47.916648 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.916753 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Feb 13 20:24:47.916858 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Feb 13 20:24:47.916951 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.917559 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Feb 13 20:24:47.917663 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Feb 13 20:24:47.917760 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.917861 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Feb 13 20:24:47.917957 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Feb 13 20:24:47.918073 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.918172 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Feb 13 20:24:47.918269 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Feb 13 20:24:47.918362 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.918464 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Feb 13 20:24:47.918569 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Feb 13 20:24:47.918664 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:24:47.918679 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:24:47.918691 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 20:24:47.918702 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 20:24:47.918713 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:24:47.918728 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:24:47.918739 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:24:47.918750 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:24:47.918761 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:24:47.918865 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 20:24:47.918880 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 20:24:47.918964 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 20:24:47.919081 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:24:47 UTC (1739478287)
Feb 13 20:24:47.919172 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 20:24:47.919185 kernel: intel_pstate: CPU model not supported
Feb 13 20:24:47.919196 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:24:47.919207 kernel: Segment Routing with IPv6
Feb 13 20:24:47.919218 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:24:47.919229 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:24:47.919240 kernel: Key type dns_resolver registered
Feb 13 20:24:47.919250 kernel: IPI shorthand broadcast: enabled
Feb 13 20:24:47.919261 kernel: sched_clock: Marking stable (966076957, 122710548)->(1181250889, -92463384)
Feb 13 20:24:47.919275 kernel: registered taskstats version 1
Feb 13 20:24:47.919286 kernel: Loading compiled-in X.509 certificates
Feb 13 20:24:47.919297 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 20:24:47.919308 kernel: Key type .fscrypt registered
Feb 13 20:24:47.919318 kernel: Key type fscrypt-provisioning registered
Feb 13 20:24:47.919329 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:24:47.919340 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:24:47.919350 kernel: ima: No architecture policies found
Feb 13 20:24:47.919361 kernel: clk: Disabling unused clocks
Feb 13 20:24:47.919375 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 20:24:47.919386 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 20:24:47.919397 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 20:24:47.919407 kernel: Run /init as init process
Feb 13 20:24:47.919418 kernel: with arguments:
Feb 13 20:24:47.919429 kernel: /init
Feb 13 20:24:47.919439 kernel: with environment:
Feb 13 20:24:47.919449 kernel: HOME=/
Feb 13 20:24:47.919459 kernel: TERM=linux
Feb 13 20:24:47.919472 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:24:47.919485 systemd[1]: Successfully made /usr/ read-only.
Feb 13 20:24:47.919500 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 20:24:47.919512 systemd[1]: Detected virtualization kvm.
Feb 13 20:24:47.919522 systemd[1]: Detected architecture x86-64.
Feb 13 20:24:47.919539 systemd[1]: Running in initrd.
Feb 13 20:24:47.919550 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:24:47.919565 systemd[1]: Hostname set to .
Feb 13 20:24:47.919576 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:24:47.919587 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:24:47.919598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:24:47.919609 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:24:47.919621 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:24:47.919632 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:24:47.919643 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:24:47.919659 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:24:47.919671 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:24:47.919683 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:24:47.919694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:24:47.919705 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:24:47.919716 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:24:47.919727 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:24:47.919741 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:24:47.919753 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:24:47.919764 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:24:47.919775 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:24:47.919786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:24:47.919797 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 20:24:47.919808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:24:47.919819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:24:47.919831 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:24:47.919845 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:24:47.919856 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:24:47.919867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:24:47.919878 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:24:47.919889 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:24:47.919900 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:24:47.919911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:24:47.919922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:24:47.919961 systemd-journald[201]: Collecting audit messages is disabled.
Feb 13 20:24:47.919993 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:24:47.920004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:24:47.920038 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:24:47.920050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:24:47.920063 systemd-journald[201]: Journal started
Feb 13 20:24:47.920094 systemd-journald[201]: Runtime Journal (/run/log/journal/2a63f83661114aa0a67b55c9a2e9c5d7) is 4.7M, max 37.9M, 33.2M free.
Feb 13 20:24:47.911438 systemd-modules-load[203]: Inserted module 'overlay'
Feb 13 20:24:47.925024 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:24:47.946057 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:24:47.947593 systemd-modules-load[203]: Inserted module 'br_netfilter'
Feb 13 20:24:47.965383 kernel: Bridge firewalling registered
Feb 13 20:24:47.965174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:24:47.966389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:24:47.967109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:24:47.974146 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:24:47.976172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:24:47.979106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:24:47.983167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:24:47.994110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:24:48.004181 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:24:48.006172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:24:48.008582 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:24:48.012624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:24:48.015167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:24:48.031073 dracut-cmdline[233]: dracut-dracut-053
Feb 13 20:24:48.036197 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 20:24:48.062139 systemd-resolved[239]: Positive Trust Anchors:
Feb 13 20:24:48.062159 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:24:48.062200 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:24:48.065534 systemd-resolved[239]: Defaulting to hostname 'linux'.
Feb 13 20:24:48.066708 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:24:48.067395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:24:48.156082 kernel: SCSI subsystem initialized
Feb 13 20:24:48.167061 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:24:48.178858 kernel: iscsi: registered transport (tcp)
Feb 13 20:24:48.201075 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:24:48.201200 kernel: QLogic iSCSI HBA Driver
Feb 13 20:24:48.267328 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:24:48.272126 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:24:48.298100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:24:48.298214 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:24:48.299196 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:24:48.349106 kernel: raid6: avx512x4 gen() 17362 MB/s
Feb 13 20:24:48.366084 kernel: raid6: avx512x2 gen() 17903 MB/s
Feb 13 20:24:48.383071 kernel: raid6: avx512x1 gen() 17740 MB/s
Feb 13 20:24:48.400100 kernel: raid6: avx2x4 gen() 17676 MB/s
Feb 13 20:24:48.417074 kernel: raid6: avx2x2 gen() 17682 MB/s
Feb 13 20:24:48.434081 kernel: raid6: avx2x1 gen() 13324 MB/s
Feb 13 20:24:48.434204 kernel: raid6: using algorithm avx512x2 gen() 17903 MB/s
Feb 13 20:24:48.452125 kernel: raid6: .... xor() 23039 MB/s, rmw enabled
Feb 13 20:24:48.452212 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 20:24:48.482047 kernel: xor: automatically using best checksumming function avx
Feb 13 20:24:48.639072 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:24:48.655917 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:24:48.665220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:24:48.683970 systemd-udevd[422]: Using default interface naming scheme 'v255'.
Feb 13 20:24:48.690482 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:24:48.699305 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:24:48.717929 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Feb 13 20:24:48.760143 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:24:48.777322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:24:48.853740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:24:48.863333 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:24:48.881169 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:24:48.883647 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:24:48.884974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:24:48.885827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:24:48.890146 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:24:48.910861 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:24:48.954524 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Feb 13 20:24:48.991139 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:24:48.991276 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:24:48.991292 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:24:48.991305 kernel: GPT:17805311 != 125829119
Feb 13 20:24:48.991318 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:24:48.991331 kernel: GPT:17805311 != 125829119
Feb 13 20:24:48.991343 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:24:48.991356 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:24:48.991369 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:24:48.991382 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:24:48.970515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:24:48.970633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:24:48.971198 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:24:48.974043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:24:48.974193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:24:48.974751 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:24:48.988321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:24:48.993779 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 20:24:49.008029 kernel: ACPI: bus type USB registered
Feb 13 20:24:49.009069 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:24:49.013030 kernel: usbcore: registered new interface driver hub
Feb 13 20:24:49.013065 kernel: usbcore: registered new device driver usb
Feb 13 20:24:49.030049 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467)
Feb 13 20:24:49.030109 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472)
Feb 13 20:24:49.061043 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 20:24:49.064123 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Feb 13 20:24:49.064258 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 13 20:24:49.064370 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 20:24:49.064493 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Feb 13 20:24:49.064597 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 13 20:24:49.064704 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:24:49.064836 kernel: hub 1-0:1.0: 4 ports detected
Feb 13 20:24:49.064944 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 13 20:24:49.067208 kernel: hub 2-0:1.0: USB hub found
Feb 13 20:24:49.068077 kernel: hub 2-0:1.0: 4 ports detected
Feb 13 20:24:49.065746 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:24:49.112460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:24:49.114119 kernel: libata version 3.00 loaded.
Feb 13 20:24:49.120277 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 20:24:49.140666 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 20:24:49.140695 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 20:24:49.140905 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 20:24:49.141746 kernel: scsi host0: ahci
Feb 13 20:24:49.141910 kernel: scsi host1: ahci
Feb 13 20:24:49.142064 kernel: scsi host2: ahci
Feb 13 20:24:49.142223 kernel: scsi host3: ahci
Feb 13 20:24:49.142361 kernel: scsi host4: ahci
Feb 13 20:24:49.142521 kernel: scsi host5: ahci
Feb 13 20:24:49.142662 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Feb 13 20:24:49.142681 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Feb 13 20:24:49.142698 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Feb 13 20:24:49.142714 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Feb 13 20:24:49.142731 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Feb 13 20:24:49.142748 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Feb 13 20:24:49.123228 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:24:49.137148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:24:49.142940 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:24:49.160984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:24:49.170210 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:24:49.174159 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:24:49.178299 disk-uuid[568]: Primary Header is updated.
Feb 13 20:24:49.178299 disk-uuid[568]: Secondary Entries is updated.
Feb 13 20:24:49.178299 disk-uuid[568]: Secondary Header is updated.
Feb 13 20:24:49.188062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:24:49.195036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:24:49.198923 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:24:49.303084 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Feb 13 20:24:49.451285 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:24:49.456538 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 20:24:49.457694 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 20:24:49.457763 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 20:24:49.457801 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 20:24:49.459488 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 20:24:49.459555 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 20:24:49.473462 kernel: usbcore: registered new interface driver usbhid
Feb 13 20:24:49.473521 kernel: usbhid: USB HID core driver
Feb 13 20:24:49.479987 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Feb 13 20:24:49.480073 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Feb 13 20:24:50.205039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:24:50.206898 disk-uuid[569]: The operation has completed successfully.
Feb 13 20:24:50.252856 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:24:50.252972 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:24:50.300436 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:24:50.306889 sh[590]: Success
Feb 13 20:24:50.324054 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:24:50.375980 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:24:50.386410 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:24:50.387637 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:24:50.402705 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 20:24:50.402751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:24:50.402769 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:24:50.404349 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:24:50.405236 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:24:50.412960 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:24:50.415434 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:24:50.422231 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:24:50.428250 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:24:50.440034 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 20:24:50.442020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:24:50.442046 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:24:50.447086 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:24:50.456947 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:24:50.458059 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 20:24:50.462467 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:24:50.472214 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:24:50.563724 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:24:50.572928 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:24:50.591871 ignition[680]: Ignition 2.20.0 Feb 13 20:24:50.592607 ignition[680]: Stage: fetch-offline Feb 13 20:24:50.593126 ignition[680]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:24:50.593563 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:24:50.593710 ignition[680]: parsed url from cmdline: "" Feb 13 20:24:50.593714 ignition[680]: no config URL provided Feb 13 20:24:50.593720 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:24:50.593729 ignition[680]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:24:50.593735 ignition[680]: failed to fetch config: resource requires networking Feb 13 20:24:50.596159 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:24:50.593963 ignition[680]: Ignition finished successfully Feb 13 20:24:50.617795 systemd-networkd[778]: lo: Link UP Feb 13 20:24:50.617808 systemd-networkd[778]: lo: Gained carrier Feb 13 20:24:50.619496 systemd-networkd[778]: Enumeration completed Feb 13 20:24:50.619977 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:24:50.619982 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:24:50.620081 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:24:50.621323 systemd-networkd[778]: eth0: Link UP Feb 13 20:24:50.621327 systemd-networkd[778]: eth0: Gained carrier Feb 13 20:24:50.621335 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:24:50.622244 systemd[1]: Reached target network.target - Network. Feb 13 20:24:50.634067 systemd-networkd[778]: eth0: DHCPv4 address 10.244.92.114/30, gateway 10.244.92.113 acquired from 10.244.92.113 Feb 13 20:24:50.634229 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:24:50.651555 ignition[782]: Ignition 2.20.0 Feb 13 20:24:50.651566 ignition[782]: Stage: fetch Feb 13 20:24:50.651808 ignition[782]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:24:50.651819 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:24:50.651926 ignition[782]: parsed url from cmdline: "" Feb 13 20:24:50.651929 ignition[782]: no config URL provided Feb 13 20:24:50.651935 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:24:50.651946 ignition[782]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:24:50.657318 ignition[782]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 20:24:50.657356 ignition[782]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 13 20:24:50.657465 ignition[782]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 20:24:50.680763 ignition[782]: GET result: OK Feb 13 20:24:50.681089 ignition[782]: parsing config with SHA512: 18476d4784fda68d161fac7c7679ab792005ef7a00a8352fb5a2ee7b8c599d74251216d13c402f25e8376038b05992161d24dfc81448427cf7a7910d31470af9 Feb 13 20:24:50.693105 unknown[782]: fetched base config from "system" Feb 13 20:24:50.693584 ignition[782]: fetch: fetch complete Feb 13 20:24:50.693119 unknown[782]: fetched base config from "system" Feb 13 20:24:50.693590 ignition[782]: fetch: fetch passed Feb 13 20:24:50.693125 unknown[782]: fetched user config from "openstack" Feb 13 20:24:50.693640 ignition[782]: Ignition finished successfully Feb 13 20:24:50.697432 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:24:50.705190 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:24:50.734698 ignition[789]: Ignition 2.20.0 Feb 13 20:24:50.734710 ignition[789]: Stage: kargs Feb 13 20:24:50.734905 ignition[789]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:24:50.734916 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:24:50.735825 ignition[789]: kargs: kargs passed Feb 13 20:24:50.736891 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:24:50.735867 ignition[789]: Ignition finished successfully Feb 13 20:24:50.746228 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:24:50.757937 ignition[796]: Ignition 2.20.0 Feb 13 20:24:50.757947 ignition[796]: Stage: disks Feb 13 20:24:50.758145 ignition[796]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:24:50.758156 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:24:50.759023 ignition[796]: disks: disks passed Feb 13 20:24:50.759065 ignition[796]: Ignition finished successfully Feb 13 20:24:50.760815 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:24:50.761992 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:24:50.762474 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:24:50.763323 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:24:50.764142 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:24:50.764841 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:24:50.772132 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:24:50.788069 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:24:50.790974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:24:51.407172 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:24:51.518026 kernel: EXT4-fs (vda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none. Feb 13 20:24:51.518562 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:24:51.519600 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:24:51.530083 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:24:51.532120 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:24:51.533000 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 20:24:51.534594 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 20:24:51.536693 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:24:51.543362 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Feb 13 20:24:51.543387 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 20:24:51.543401 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:24:51.543414 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:24:51.543427 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:24:51.536730 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:24:51.552061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:24:51.553638 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:24:51.556196 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:24:51.629002 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:24:51.634044 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:24:51.641116 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:24:51.647289 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:24:51.740899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:24:51.746098 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:24:51.748185 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:24:51.757045 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 20:24:51.779754 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:24:51.792919 ignition[930]: INFO : Ignition 2.20.0 Feb 13 20:24:51.794839 ignition[930]: INFO : Stage: mount Feb 13 20:24:51.794839 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:24:51.794839 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:24:51.794839 ignition[930]: INFO : mount: mount passed Feb 13 20:24:51.794839 ignition[930]: INFO : Ignition finished successfully Feb 13 20:24:51.797281 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:24:52.332593 systemd-networkd[778]: eth0: Gained IPv6LL Feb 13 20:24:52.403686 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:24:52.739137 systemd-networkd[778]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:171c:24:19ff:fef4:5c72/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:171c:24:19ff:fef4:5c72/64 assigned by NDisc. Feb 13 20:24:52.739159 systemd-networkd[778]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Feb 13 20:24:58.681054 coreos-metadata[815]: Feb 13 20:24:58.680 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:24:58.699115 coreos-metadata[815]: Feb 13 20:24:58.698 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 20:24:58.715340 coreos-metadata[815]: Feb 13 20:24:58.715 INFO Fetch successful Feb 13 20:24:58.723693 coreos-metadata[815]: Feb 13 20:24:58.715 INFO wrote hostname srv-llv2e.gb1.brightbox.com to /sysroot/etc/hostname Feb 13 20:24:58.725972 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 20:24:58.726217 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 20:24:58.736088 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:24:58.744521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:24:58.763059 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (948) Feb 13 20:24:58.768291 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 20:24:58.768378 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:24:58.771209 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:24:58.777023 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:24:58.779099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:24:58.800853 ignition[966]: INFO : Ignition 2.20.0 Feb 13 20:24:58.800853 ignition[966]: INFO : Stage: files Feb 13 20:24:58.801917 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:24:58.801917 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:24:58.801917 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:24:58.803454 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:24:58.803454 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:24:58.805267 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:24:58.805953 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:24:58.806515 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:24:58.806292 unknown[966]: wrote ssh authorized keys file for user: core Feb 13 20:24:58.810960 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:24:58.810960 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 20:24:58.972376 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:24:59.285738 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:24:59.285738 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 20:24:59.285738 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 20:24:59.874244 ignition[966]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:25:00.390518 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 20:25:00.390518 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:25:00.390518 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:25:00.390518 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:25:00.390518 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:25:00.390518 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:25:00.399060 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 20:25:00.866799 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:25:02.301537 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:25:02.305606 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 20:25:02.305606 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:25:02.305606 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:25:02.305606 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 20:25:02.305606 ignition[966]: INFO : files: op(e): [started] setting preset to enabled for 
"prepare-helm.service" Feb 13 20:25:02.305606 ignition[966]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:25:02.305606 ignition[966]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:25:02.305606 ignition[966]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:25:02.305606 ignition[966]: INFO : files: files passed Feb 13 20:25:02.318493 ignition[966]: INFO : Ignition finished successfully Feb 13 20:25:02.312033 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:25:02.320235 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:25:02.323819 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:25:02.324822 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:25:02.324921 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:25:02.359445 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:25:02.360660 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:25:02.360660 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:25:02.363872 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:25:02.364532 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:25:02.372431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:25:02.419357 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:25:02.419601 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:25:02.423204 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:25:02.424737 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:25:02.426291 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:25:02.433233 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:25:02.453314 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:25:02.460325 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:25:02.476600 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:25:02.477764 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:25:02.478770 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:25:02.479656 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:25:02.479780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:25:02.481423 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:25:02.481876 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:25:02.482711 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:25:02.483190 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Feb 13 20:25:02.484005 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:25:02.484857 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:25:02.485652 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:25:02.486500 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:25:02.487264 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:25:02.488053 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:25:02.490165 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:25:02.490297 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:25:02.491397 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:25:02.492393 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:25:02.493413 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:25:02.493543 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:25:02.494410 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:25:02.495898 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:25:02.497128 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:25:02.497257 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:25:02.501541 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:25:02.501655 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:25:02.513224 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:25:02.513976 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:25:02.514139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:25:02.515987 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:25:02.519611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:25:02.520361 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:25:02.521772 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:25:02.521879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:25:02.536810 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:25:02.536963 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:25:02.543250 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:25:02.548023 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:25:02.548181 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:25:02.549351 ignition[1019]: INFO : Ignition 2.20.0 Feb 13 20:25:02.549351 ignition[1019]: INFO : Stage: umount Feb 13 20:25:02.549351 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:25:02.549351 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:25:02.552587 ignition[1019]: INFO : umount: umount passed Feb 13 20:25:02.552587 ignition[1019]: INFO : Ignition finished successfully Feb 13 20:25:02.551561 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:25:02.551661 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Feb 13 20:25:02.553467 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:25:02.553569 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:25:02.554040 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:25:02.554084 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:25:02.554745 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:25:02.554785 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:25:02.555497 systemd[1]: Stopped target network.target - Network. Feb 13 20:25:02.556178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:25:02.556229 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:25:02.556912 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:25:02.557591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:25:02.563065 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:25:02.563548 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:25:02.564424 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:25:02.565164 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:25:02.565212 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:25:02.565829 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:25:02.565861 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:25:02.566522 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:25:02.566571 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:25:02.567269 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:25:02.567310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:25:02.567948 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:25:02.567989 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:25:02.568842 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:25:02.569955 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:25:02.572097 systemd-networkd[778]: eth0: DHCPv6 lease lost Feb 13 20:25:02.578487 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:25:02.578592 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:25:02.580152 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 20:25:02.580698 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:25:02.580765 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:25:02.586180 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:25:02.587017 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:25:02.587068 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:25:02.588461 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:25:02.589134 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:25:02.589246 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Feb 13 20:25:02.593531 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 20:25:02.597892 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:25:02.598002 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:25:02.599699 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:25:02.599746 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:25:02.601164 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:25:02.601212 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:25:02.603190 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 20:25:02.603257 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 20:25:02.603576 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:25:02.603724 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:25:02.604915 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:25:02.605094 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:25:02.606720 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:25:02.606793 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:25:02.608112 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:25:02.608153 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:25:02.609551 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:25:02.609596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:25:02.611862 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:25:02.611901 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:25:02.613465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:25:02.613513 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:25:02.620164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:25:02.620868 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:25:02.620921 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:25:02.621431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:25:02.621481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:25:02.622941 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 20:25:02.622998 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 20:25:02.628312 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:25:02.628450 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:25:02.630603 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:25:02.643331 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:25:02.652666 systemd[1]: Switching root. Feb 13 20:25:02.686494 systemd-journald[201]: Journal stopped Feb 13 20:25:03.715297 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). 
Feb 13 20:25:03.715367 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:25:03.715391 kernel: SELinux: policy capability open_perms=1 Feb 13 20:25:03.715432 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:25:03.715445 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:25:03.715458 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:25:03.715471 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:25:03.715483 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:25:03.715496 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:25:03.715509 kernel: audit: type=1403 audit(1739478302.825:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:25:03.715523 systemd[1]: Successfully loaded SELinux policy in 39.006ms. Feb 13 20:25:03.715553 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.218ms. Feb 13 20:25:03.715575 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 20:25:03.715590 systemd[1]: Detected virtualization kvm. Feb 13 20:25:03.715603 systemd[1]: Detected architecture x86-64. Feb 13 20:25:03.715618 systemd[1]: Detected first boot. Feb 13 20:25:03.715637 systemd[1]: Hostname set to <srv-llv2e.gb1.brightbox.com>. Feb 13 20:25:03.715650 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:25:03.715665 zram_generator::config[1064]: No configuration found. Feb 13 20:25:03.715682 kernel: Guest personality initialized and is inactive Feb 13 20:25:03.715695 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 20:25:03.715708 kernel: Initialized host personality Feb 13 20:25:03.715720 kernel: NET: Registered PF_VSOCK protocol family Feb 13 20:25:03.715737 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:25:03.715751 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 20:25:03.715765 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:25:03.715778 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:25:03.715792 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:25:03.715812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:25:03.715833 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:25:03.715849 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:25:03.715863 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:25:03.715878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:25:03.715892 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:25:03.715909 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:25:03.715923 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:25:03.715938 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:25:03.715952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:25:03.715970 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:25:03.715984 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:25:03.715998 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:25:03.720061 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:25:03.720083 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:25:03.720097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:25:03.720110 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:25:03.720123 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:25:03.720135 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:25:03.720148 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:25:03.720161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:25:03.720189 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:25:03.720202 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:25:03.720214 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:25:03.720231 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:25:03.720244 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:25:03.720257 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 20:25:03.720269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:25:03.720283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:25:03.720295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:25:03.720311 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:25:03.720324 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:25:03.720337 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:25:03.720349 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:25:03.720362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:25:03.720375 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:25:03.720388 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:25:03.720403 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:25:03.720417 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:25:03.720432 systemd[1]: Reached target machines.target - Containers. Feb 13 20:25:03.720445 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:25:03.720457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:25:03.720472 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Feb 13 20:25:03.720485 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:25:03.720498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:25:03.720510 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:25:03.720523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:25:03.720542 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:25:03.720555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:25:03.720568 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:25:03.720581 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:25:03.720593 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:25:03.720606 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:25:03.720618 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:25:03.720631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 20:25:03.720650 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:25:03.720663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:25:03.720676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:25:03.720688 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:25:03.720700 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 20:25:03.720713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:25:03.720726 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:25:03.720739 systemd[1]: Stopped verity-setup.service. Feb 13 20:25:03.720758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:25:03.720772 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:25:03.720788 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:25:03.720801 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:25:03.720814 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:25:03.720827 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:25:03.720840 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:25:03.720853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:25:03.720865 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:25:03.720878 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:25:03.720893 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:25:03.720922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:25:03.720936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 20:25:03.720953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:25:03.720971 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:25:03.720985 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:25:03.721052 systemd-journald[1153]: Collecting audit messages is disabled. Feb 13 20:25:03.721082 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:25:03.721103 kernel: fuse: init (API version 7.39) Feb 13 20:25:03.721118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:25:03.721132 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:25:03.721146 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 20:25:03.721160 systemd-journald[1153]: Journal started Feb 13 20:25:03.721187 systemd-journald[1153]: Runtime Journal (/run/log/journal/2a63f83661114aa0a67b55c9a2e9c5d7) is 4.7M, max 37.9M, 33.2M free. Feb 13 20:25:03.457735 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:25:03.469643 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:25:03.470128 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:25:03.732179 kernel: loop: module loaded Feb 13 20:25:03.732231 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:25:03.740031 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:25:03.740081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:25:03.748053 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:25:03.750122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:25:03.759046 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:25:03.769041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:25:03.772039 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:25:03.776051 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:25:03.778106 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:25:03.778741 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:25:03.779425 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:25:03.779586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:25:03.780345 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:25:03.781263 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:25:03.781965 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:25:03.791778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:25:03.812327 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 20:25:03.817101 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Feb 13 20:25:03.827192 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:25:03.827607 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:25:03.835167 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:25:03.840286 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:25:03.847146 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 20:25:03.847614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:25:03.854247 kernel: loop0: detected capacity change from 0 to 8 Feb 13 20:25:03.850790 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:25:03.852528 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:25:03.875067 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:25:03.878381 kernel: ACPI: bus type drm_connector registered Feb 13 20:25:03.877485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:25:03.878924 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:25:03.879145 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:25:03.885449 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 20:25:03.902242 systemd-journald[1153]: Time spent on flushing to /var/log/journal/2a63f83661114aa0a67b55c9a2e9c5d7 is 55.403ms for 1178 entries. Feb 13 20:25:03.902242 systemd-journald[1153]: System Journal (/var/log/journal/2a63f83661114aa0a67b55c9a2e9c5d7) is 8M, max 584.8M, 576.8M free. Feb 13 20:25:03.974807 systemd-journald[1153]: Received client request to flush runtime journal. Feb 13 20:25:03.974874 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 20:25:03.974891 kernel: loop2: detected capacity change from 0 to 138176 Feb 13 20:25:03.954370 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:25:03.958631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:25:03.983442 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:25:04.015056 kernel: loop3: detected capacity change from 0 to 147912 Feb 13 20:25:04.034971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:25:04.044461 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Feb 13 20:25:04.045046 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Feb 13 20:25:04.050129 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:25:04.059543 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:25:04.066772 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:25:04.072026 kernel: loop4: detected capacity change from 0 to 8 Feb 13 20:25:04.074045 kernel: loop5: detected capacity change from 0 to 218376 Feb 13 20:25:04.109077 kernel: loop6: detected capacity change from 0 to 138176 Feb 13 20:25:04.137529 kernel: loop7: detected capacity change from 0 to 147912 Feb 13 20:25:04.170760 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. 
Feb 13 20:25:04.174808 (sd-merge)[1228]: Merged extensions into '/usr'. Feb 13 20:25:04.181939 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:25:04.182078 systemd[1]: Reloading... Feb 13 20:25:04.320052 zram_generator::config[1257]: No configuration found. Feb 13 20:25:04.391129 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:25:04.476468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:25:04.547059 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:25:04.547398 systemd[1]: Reloading finished in 364 ms. Feb 13 20:25:04.560307 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:25:04.561144 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:25:04.573198 systemd[1]: Starting ensure-sysext.service... Feb 13 20:25:04.575161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:25:04.603607 systemd[1]: Reload requested from client PID 1312 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:25:04.603629 systemd[1]: Reloading... Feb 13 20:25:04.629403 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:25:04.629702 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:25:04.630572 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:25:04.630839 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Feb 13 20:25:04.630905 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Feb 13 20:25:04.639653 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:25:04.639666 systemd-tmpfiles[1313]: Skipping /boot Feb 13 20:25:04.662249 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:25:04.662264 systemd-tmpfiles[1313]: Skipping /boot Feb 13 20:25:04.739094 zram_generator::config[1345]: No configuration found. Feb 13 20:25:04.881557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:25:04.952785 systemd[1]: Reloading finished in 348 ms. Feb 13 20:25:04.964129 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:25:04.970364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:25:04.996232 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 20:25:05.000263 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:25:05.003226 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:25:05.008133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:25:05.011295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 20:25:05.014276 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:25:05.019448 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:25:05.019645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:25:05.027109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:25:05.036302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:25:05.039597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:25:05.041168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:25:05.041288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 20:25:05.041388 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:25:05.053818 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:25:05.066620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:25:05.067276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:25:05.073514 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:25:05.074735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:25:05.075144 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 20:25:05.083590 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:25:05.087731 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:25:05.088213 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:25:05.091082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:25:05.091970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:25:05.092255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:25:05.093106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:25:05.093271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:25:05.094862 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:25:05.095200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:25:05.096510 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:25:05.096669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
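The modprobe@dm_mod, modprobe@efi_pstore, modprobe@loop and modprobe@drm units above are instances of systemd's templated modprobe@.service, which loads the kernel module named by the instance specifier. Abridged from memory of the upstream unit (the copy shipped here may differ):

    # modprobe@.service (abridged sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i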
Feb 13 20:25:05.110547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:25:05.110763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:25:05.110845 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:25:05.111582 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:25:05.113067 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:25:05.114159 systemd[1]: Finished ensure-sysext.service. Feb 13 20:25:05.119277 systemd-udevd[1405]: Using default interface naming scheme 'v255'. Feb 13 20:25:05.134167 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:25:05.149304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:25:05.157212 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:25:05.170845 augenrules[1451]: No rules Feb 13 20:25:05.171488 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:25:05.172113 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 20:25:05.178689 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:25:05.255671 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:25:05.320061 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:25:05.339032 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Feb 13 20:25:05.345171 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:25:05.369729 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:25:05.370310 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:25:05.372475 systemd-networkd[1441]: lo: Link UP Feb 13 20:25:05.372484 systemd-networkd[1441]: lo: Gained carrier Feb 13 20:25:05.373257 systemd-networkd[1441]: Enumeration completed Feb 13 20:25:05.373668 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:25:05.375164 systemd-timesyncd[1436]: No network connectivity, watching for changes. Feb 13 20:25:05.381208 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 20:25:05.391048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1446) Feb 13 20:25:05.391174 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:25:05.406445 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 20:25:05.410909 systemd-resolved[1404]: Positive Trust Anchors: Feb 13 20:25:05.411529 systemd-resolved[1404]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:25:05.411655 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:25:05.420598 systemd-resolved[1404]: Using system hostname 'srv-llv2e.gb1.brightbox.com'. Feb 13 20:25:05.423798 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:25:05.424642 systemd[1]: Reached target network.target - Network. Feb 13 20:25:05.425479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:25:05.471880 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:25:05.471890 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:25:05.473208 systemd-networkd[1441]: eth0: Link UP Feb 13 20:25:05.473215 systemd-networkd[1441]: eth0: Gained carrier Feb 13 20:25:05.473234 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:25:05.491767 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 20:25:05.495777 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 20:25:05.495802 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 20:25:05.495963 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 20:25:05.500142 systemd-networkd[1441]: eth0: DHCPv4 address 10.244.92.114/30, gateway 10.244.92.113 acquired from 10.244.92.113 Feb 13 20:25:05.500887 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Feb 13 20:25:05.542156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:25:05.551201 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:25:05.558672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:25:05.570285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:25:05.705506 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:25:05.738192 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:25:05.738993 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:25:05.774516 lvm[1495]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:25:06.715591 systemd-resolved[1404]: Clock change detected. Flushing caches. Feb 13 20:25:06.715907 systemd-timesyncd[1436]: Contacted time server 139.143.5.30:123 (1.flatcar.pool.ntp.org). Feb 13 20:25:06.716771 systemd-timesyncd[1436]: Initial clock synchronization to Thu 2025-02-13 20:25:06.715447 UTC. 
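Per the lines above, eth0 was matched by the catch-all /usr/lib/systemd/network/zz-default.network and configured over DHCPv4 (10.244.92.114/30, gateway 10.244.92.113). A minimal sketch of a .network file of that shape; the exact file Flatcar ships may differ:

    # zz-default.network (illustrative sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes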
Feb 13 20:25:06.720947 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:25:06.729115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:25:06.729697 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:25:06.730324 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:25:06.730902 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:25:06.731712 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:25:06.732331 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:25:06.732882 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:25:06.733389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:25:06.733427 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:25:06.733965 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:25:06.735929 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:25:06.738401 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:25:06.742668 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 20:25:06.743380 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 20:25:06.743866 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 20:25:06.747426 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:25:06.748283 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 20:25:06.757927 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:25:06.759214 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:25:06.759819 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:25:06.760265 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:25:06.760768 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:25:06.760806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:25:06.763275 lvm[1500]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:25:06.763913 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:25:06.772975 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:25:06.774876 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:25:06.777879 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:25:06.781907 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:25:06.782836 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:25:06.786915 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:25:06.788853 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
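The docker.socket unit listed above is the one the earlier reloads warned about (ListenStream= under the legacy /var/run, rewritten to /run/docker.sock). The socket half of that unit looks roughly like this sketch, not the verbatim file from this image:

    # docker.socket (illustrative sketch)
    [Unit]
    Description=Docker Socket for the API

    [Socket]
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker

    [Install]
    WantedBy=sockets.target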
Feb 13 20:25:06.792932 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:25:06.796966 jq[1504]: false Feb 13 20:25:06.802105 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:25:06.807228 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:25:06.809016 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:25:06.812091 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:25:06.819301 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:25:06.822060 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:25:06.823858 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:25:06.827118 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:25:06.827303 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:25:06.844029 dbus-daemon[1503]: [system] SELinux support is enabled Feb 13 20:25:06.846319 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:25:06.850573 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:25:06.850605 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:25:06.853421 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:25:06.853443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 20:25:06.867239 extend-filesystems[1505]: Found loop4 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found loop5 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found loop6 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found loop7 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda1 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda2 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda3 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found usr Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda4 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda6 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda7 Feb 13 20:25:06.867239 extend-filesystems[1505]: Found vda9 Feb 13 20:25:06.867239 extend-filesystems[1505]: Checking size of /dev/vda9 Feb 13 20:25:06.867550 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1441 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 20:25:06.911267 tar[1517]: linux-amd64/LICENSE Feb 13 20:25:06.911267 tar[1517]: linux-amd64/helm Feb 13 20:25:06.911474 update_engine[1514]: I20250213 20:25:06.883683 1514 main.cc:92] Flatcar Update Engine starting Feb 13 20:25:06.911474 update_engine[1514]: I20250213 20:25:06.890172 1514 update_check_scheduler.cc:74] Next update check in 6m41s Feb 13 20:25:06.874149 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 20:25:06.876503 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:25:06.877815 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:25:06.911925 jq[1515]: true Feb 13 20:25:06.890031 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:25:06.906919 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:25:06.907898 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:25:06.908114 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:25:06.908577 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:25:06.924465 extend-filesystems[1505]: Resized partition /dev/vda9 Feb 13 20:25:06.937441 jq[1537]: true Feb 13 20:25:06.949963 extend-filesystems[1544]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:25:06.965864 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 20:25:06.986138 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1468) Feb 13 20:25:07.125769 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:25:07.153956 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:25:07.154298 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1534 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:25:07.159402 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
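extend-filesystems above walks the block devices, then grows /dev/vda9; the kernel confirms an online ext4 resize from 1617920 to 15121403 4k blocks. Done by hand, the equivalent is roughly the following, assuming cloud-utils' growpart is available (neither command appears in this log):

    growpart /dev/vda 9    # extend partition 9 to the end of the disk
    resize2fs /dev/vda9    # grow the mounted ext4 filesystem online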
Feb 13 20:25:07.160262 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:25:07.161474 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:25:07.161474 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:25:07.161474 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:25:07.162108 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:25:07.162866 systemd-logind[1511]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 20:25:07.162890 systemd-logind[1511]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:25:07.163198 systemd-logind[1511]: New seat seat0. Feb 13 20:25:07.165497 extend-filesystems[1505]: Resized filesystem in /dev/vda9 Feb 13 20:25:07.167662 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:25:07.169144 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:25:07.169408 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:25:07.181879 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 20:25:07.186859 systemd[1]: Starting sshkeys.service... Feb 13 20:25:07.232733 polkitd[1566]: Started polkitd version 121 Feb 13 20:25:07.245015 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:25:07.254684 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:25:07.268480 polkitd[1566]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:25:07.268567 polkitd[1566]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:25:07.277001 polkitd[1566]: Finished loading, compiling and executing 2 rules Feb 13 20:25:07.277592 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:25:07.277457 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:25:07.278933 polkitd[1566]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:25:07.323284 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:25:07.325057 systemd-hostnamed[1534]: Hostname set to (static) Feb 13 20:25:07.393793 containerd[1536]: time="2025-02-13T20:25:07.393574566Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 20:25:07.460701 containerd[1536]: time="2025-02-13T20:25:07.460188492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467291 containerd[1536]: time="2025-02-13T20:25:07.466892596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467291 containerd[1536]: time="2025-02-13T20:25:07.466934602Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:25:07.467291 containerd[1536]: time="2025-02-13T20:25:07.466953192Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 20:25:07.467291 containerd[1536]: time="2025-02-13T20:25:07.467110399Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:25:07.467291 containerd[1536]: time="2025-02-13T20:25:07.467128368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467291 containerd[1536]: time="2025-02-13T20:25:07.467187344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467291 containerd[1536]: time="2025-02-13T20:25:07.467199411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467508 containerd[1536]: time="2025-02-13T20:25:07.467390623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467508 containerd[1536]: time="2025-02-13T20:25:07.467404271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467508 containerd[1536]: time="2025-02-13T20:25:07.467416532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467508 containerd[1536]: time="2025-02-13T20:25:07.467425621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:25:07.467508 containerd[1536]: time="2025-02-13T20:25:07.467501623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:25:07.468055 containerd[1536]: time="2025-02-13T20:25:07.467698287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:25:07.470835 containerd[1536]: time="2025-02-13T20:25:07.470814558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:25:07.470835 containerd[1536]: time="2025-02-13T20:25:07.470834508Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:25:07.470930 containerd[1536]: time="2025-02-13T20:25:07.470918700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:25:07.471017 containerd[1536]: time="2025-02-13T20:25:07.470961306Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:25:07.473182 containerd[1536]: time="2025-02-13T20:25:07.473001551Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:25:07.473182 containerd[1536]: time="2025-02-13T20:25:07.473050560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:25:07.473182 containerd[1536]: time="2025-02-13T20:25:07.473069400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 20:25:07.473182 containerd[1536]: time="2025-02-13T20:25:07.473086357Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:25:07.473182 containerd[1536]: time="2025-02-13T20:25:07.473100176Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:25:07.473337 containerd[1536]: time="2025-02-13T20:25:07.473216130Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473442305Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473554095Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473568828Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473582276Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473596112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473619792Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473634705Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473648496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473663458Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473682 containerd[1536]: time="2025-02-13T20:25:07.473676380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473695865Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473709473Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473730713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473743500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473773285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473786392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473797989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473823999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473837897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473850361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473862702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473877725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473888714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.473921 containerd[1536]: time="2025-02-13T20:25:07.473900177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.473913609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.473928478Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.473949896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.473962527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.473972340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474022907Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474038761Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474049449Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474061178Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474070444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474082954Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474095892Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:25:07.474266 containerd[1536]: time="2025-02-13T20:25:07.474107719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:25:07.474556 containerd[1536]: time="2025-02-13T20:25:07.474432252Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:25:07.474556 containerd[1536]: time="2025-02-13T20:25:07.474479006Z" level=info msg="Connect containerd service" Feb 13 20:25:07.474556 containerd[1536]: time="2025-02-13T20:25:07.474550204Z" level=info msg="using legacy CRI server" Feb 13 20:25:07.474790 containerd[1536]: time="2025-02-13T20:25:07.474559895Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:25:07.474790 containerd[1536]: time="2025-02-13T20:25:07.474688448Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:25:07.482588 
containerd[1536]: time="2025-02-13T20:25:07.480307051Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.480942852Z" level=info msg="Start subscribing containerd event" Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.480991671Z" level=info msg="Start recovering state" Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.481086608Z" level=info msg="Start event monitor" Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.481103416Z" level=info msg="Start snapshots syncer" Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.481112929Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.481125769Z" level=info msg="Start streaming server" Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.481303416Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:25:07.482588 containerd[1536]: time="2025-02-13T20:25:07.481352742Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:25:07.481497 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:25:07.483906 containerd[1536]: time="2025-02-13T20:25:07.483702890Z" level=info msg="containerd successfully booted in 0.092566s" Feb 13 20:25:07.582045 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:25:07.608143 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:25:07.619030 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:25:07.624144 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:25:07.624380 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:25:07.632673 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:25:07.642244 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:25:07.650307 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:25:07.653968 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:25:07.654585 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:25:07.701735 tar[1517]: linux-amd64/README.md Feb 13 20:25:07.725918 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:25:08.408110 systemd-networkd[1441]: eth0: Gained IPv6LL Feb 13 20:25:08.412803 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:25:08.417659 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:25:08.427451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:08.431856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:25:08.466857 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:25:09.238394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
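The CRI plugin configuration dumped above (overlayfs snapshotter, default runtime runc via io.containerd.runc.v2, Options:map[SystemdCgroup:true]) corresponds to a containerd config.toml roughly like this sketch; it is not the actual file from this host:

    # /etc/containerd/config.toml (illustrative sketch)
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true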
Feb 13 20:25:09.243802 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:25:09.412402 systemd-networkd[1441]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:171c:24:19ff:fef4:5c72/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:171c:24:19ff:fef4:5c72/64 assigned by NDisc. Feb 13 20:25:09.412422 systemd-networkd[1441]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 20:25:09.806573 kubelet[1626]: E0213 20:25:09.806406 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:25:09.808970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:25:09.809121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:25:09.809488 systemd[1]: kubelet.service: Consumed 1.097s CPU time, 253.6M memory peak. Feb 13 20:25:12.723158 login[1603]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 20:25:12.725728 login[1604]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:25:12.739578 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:25:12.749527 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:25:12.756160 systemd-logind[1511]: New session 2 of user core. Feb 13 20:25:12.765797 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:25:12.778071 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:25:12.781571 (systemd)[1642]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:25:12.784612 systemd-logind[1511]: New session c1 of user core. Feb 13 20:25:12.923099 systemd[1642]: Queued start job for default target default.target. Feb 13 20:25:12.931384 systemd[1642]: Created slice app.slice - User Application Slice. Feb 13 20:25:12.931550 systemd[1642]: Reached target paths.target - Paths. Feb 13 20:25:12.931600 systemd[1642]: Reached target timers.target - Timers. Feb 13 20:25:12.933140 systemd[1642]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:25:12.963714 systemd[1642]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:25:12.963979 systemd[1642]: Reached target sockets.target - Sockets. Feb 13 20:25:12.964059 systemd[1642]: Reached target basic.target - Basic System. Feb 13 20:25:12.964131 systemd[1642]: Reached target default.target - Main User Target. Feb 13 20:25:12.964143 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:25:12.964191 systemd[1642]: Startup finished in 172ms. Feb 13 20:25:12.969963 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:25:13.222457 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:25:13.237300 systemd[1]: Started sshd@0-10.244.92.114:22-139.178.89.65:35840.service - OpenSSH per-connection server daemon (139.178.89.65:35840). 
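The kubelet exit above is expected on a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so the unit will keep failing until that happens. For reference, the file it is looking for is a KubeletConfiguration object; a minimal sketch, not a recommendation for this host:

    # /var/lib/kubelet/config.yaml (minimal sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd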
Feb 13 20:25:13.724180 login[1603]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:25:13.736483 systemd-logind[1511]: New session 1 of user core. Feb 13 20:25:13.744004 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:25:13.923234 coreos-metadata[1502]: Feb 13 20:25:13.923 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:25:13.951318 coreos-metadata[1502]: Feb 13 20:25:13.951 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 20:25:13.958483 coreos-metadata[1502]: Feb 13 20:25:13.958 INFO Fetch failed with 404: resource not found Feb 13 20:25:13.958483 coreos-metadata[1502]: Feb 13 20:25:13.958 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 20:25:13.959254 coreos-metadata[1502]: Feb 13 20:25:13.959 INFO Fetch successful Feb 13 20:25:13.959254 coreos-metadata[1502]: Feb 13 20:25:13.959 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 20:25:13.971672 coreos-metadata[1502]: Feb 13 20:25:13.971 INFO Fetch successful Feb 13 20:25:13.971672 coreos-metadata[1502]: Feb 13 20:25:13.971 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 20:25:13.985950 coreos-metadata[1502]: Feb 13 20:25:13.985 INFO Fetch successful Feb 13 20:25:13.985950 coreos-metadata[1502]: Feb 13 20:25:13.985 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 20:25:14.000064 coreos-metadata[1502]: Feb 13 20:25:13.999 INFO Fetch successful Feb 13 20:25:14.000290 coreos-metadata[1502]: Feb 13 20:25:14.000 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 20:25:14.019174 coreos-metadata[1502]: Feb 13 20:25:14.019 INFO Fetch successful Feb 13 20:25:14.054367 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:25:14.058745 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:25:14.150350 sshd[1661]: Accepted publickey for core from 139.178.89.65 port 35840 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:25:14.153741 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:14.163961 systemd-logind[1511]: New session 3 of user core. Feb 13 20:25:14.173004 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:25:14.406800 coreos-metadata[1578]: Feb 13 20:25:14.406 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:25:14.429456 coreos-metadata[1578]: Feb 13 20:25:14.429 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 20:25:14.453505 coreos-metadata[1578]: Feb 13 20:25:14.453 INFO Fetch successful Feb 13 20:25:14.454003 coreos-metadata[1578]: Feb 13 20:25:14.453 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 20:25:14.480586 coreos-metadata[1578]: Feb 13 20:25:14.480 INFO Fetch successful Feb 13 20:25:14.483072 unknown[1578]: wrote ssh authorized keys file for user: core Feb 13 20:25:14.515999 update-ssh-keys[1684]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:25:14.519338 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:25:14.524925 systemd[1]: Finished sshkeys.service. 
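coreos-metadata above fails to locate a config-drive and falls back to the link-local metadata service, fetching hostname, instance-id, instance-type and addresses. The same endpoints it logs can be queried by hand from the instance:

    curl http://169.254.169.254/latest/meta-data/hostname
    curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key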
Feb 13 20:25:14.526233 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:25:14.530875 systemd[1]: Startup finished in 1.102s (kernel) + 15.132s (initrd) + 10.836s (userspace) = 27.072s. Feb 13 20:25:14.925302 systemd[1]: Started sshd@1-10.244.92.114:22-139.178.89.65:55380.service - OpenSSH per-connection server daemon (139.178.89.65:55380). Feb 13 20:25:15.826969 sshd[1689]: Accepted publickey for core from 139.178.89.65 port 55380 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:25:15.828841 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:15.838635 systemd-logind[1511]: New session 4 of user core. Feb 13 20:25:15.839932 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:25:16.454403 sshd[1691]: Connection closed by 139.178.89.65 port 55380 Feb 13 20:25:16.454177 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:16.463284 systemd[1]: sshd@1-10.244.92.114:22-139.178.89.65:55380.service: Deactivated successfully. Feb 13 20:25:16.467497 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:25:16.468939 systemd-logind[1511]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:25:16.470121 systemd-logind[1511]: Removed session 4. Feb 13 20:25:16.614389 systemd[1]: Started sshd@2-10.244.92.114:22-139.178.89.65:55396.service - OpenSSH per-connection server daemon (139.178.89.65:55396). Feb 13 20:25:17.512744 sshd[1697]: Accepted publickey for core from 139.178.89.65 port 55396 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:25:17.516287 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:17.527355 systemd-logind[1511]: New session 5 of user core. Feb 13 20:25:17.536020 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:25:18.130058 sshd[1699]: Connection closed by 139.178.89.65 port 55396 Feb 13 20:25:18.131633 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:18.139819 systemd[1]: sshd@2-10.244.92.114:22-139.178.89.65:55396.service: Deactivated successfully. Feb 13 20:25:18.143545 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:25:18.145243 systemd-logind[1511]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:25:18.146798 systemd-logind[1511]: Removed session 5. Feb 13 20:25:18.298243 systemd[1]: Started sshd@3-10.244.92.114:22-139.178.89.65:55400.service - OpenSSH per-connection server daemon (139.178.89.65:55400). Feb 13 20:25:19.212083 sshd[1705]: Accepted publickey for core from 139.178.89.65 port 55400 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:25:19.215737 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:19.227711 systemd-logind[1511]: New session 6 of user core. Feb 13 20:25:19.240014 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:25:19.820678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:25:19.829193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:19.834953 sshd[1707]: Connection closed by 139.178.89.65 port 55400 Feb 13 20:25:19.837016 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:19.848974 systemd-logind[1511]: Session 6 logged out. Waiting for processes to exit. 
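The sshd@0-10.244.92.114:22-... style units above are per-connection instances: sshd.socket accepts each TCP connection and systemd spawns one sshd@.service for that client, which is why every login in this log produces a new sshd@N-... unit. The socket half of that pattern looks roughly like the following sketch of the Accept=yes idiom, not the unit shipped here:

    # sshd.socket (illustrative sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target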
Feb 13 20:25:19.849713 systemd[1]: sshd@3-10.244.92.114:22-139.178.89.65:55400.service: Deactivated successfully. Feb 13 20:25:19.857149 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:25:19.860697 systemd-logind[1511]: Removed session 6. Feb 13 20:25:19.991000 systemd[1]: Started sshd@4-10.244.92.114:22-139.178.89.65:55408.service - OpenSSH per-connection server daemon (139.178.89.65:55408). Feb 13 20:25:20.001209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:20.002503 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:25:20.066290 kubelet[1722]: E0213 20:25:20.066209 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:25:20.072229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:25:20.072428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:25:20.072771 systemd[1]: kubelet.service: Consumed 199ms CPU time, 103.2M memory peak. Feb 13 20:25:20.896584 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 55408 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:25:20.899545 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:20.907509 systemd-logind[1511]: New session 7 of user core. Feb 13 20:25:20.919187 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:25:21.386023 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:25:21.386377 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:25:21.402738 sudo[1731]: pam_unix(sudo:session): session closed for user root Feb 13 20:25:21.546912 sshd[1730]: Connection closed by 139.178.89.65 port 55408 Feb 13 20:25:21.548000 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:21.554443 systemd[1]: sshd@4-10.244.92.114:22-139.178.89.65:55408.service: Deactivated successfully. Feb 13 20:25:21.558451 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:25:21.560696 systemd-logind[1511]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:25:21.563582 systemd-logind[1511]: Removed session 7. Feb 13 20:25:21.716076 systemd[1]: Started sshd@5-10.244.92.114:22-139.178.89.65:55416.service - OpenSSH per-connection server daemon (139.178.89.65:55416). Feb 13 20:25:22.617917 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 55416 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:25:22.621471 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:22.633359 systemd-logind[1511]: New session 8 of user core. Feb 13 20:25:22.640010 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 20:25:23.103972 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:25:23.104546 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:25:23.111488 sudo[1741]: pam_unix(sudo:session): session closed for user root Feb 13 20:25:23.121330 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 20:25:23.121693 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:25:23.142161 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 20:25:23.179858 augenrules[1763]: No rules Feb 13 20:25:23.181833 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:25:23.182083 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 20:25:23.184126 sudo[1740]: pam_unix(sudo:session): session closed for user root Feb 13 20:25:23.328309 sshd[1739]: Connection closed by 139.178.89.65 port 55416 Feb 13 20:25:23.328978 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:23.333485 systemd[1]: sshd@5-10.244.92.114:22-139.178.89.65:55416.service: Deactivated successfully. Feb 13 20:25:23.336053 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:25:23.338029 systemd-logind[1511]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:25:23.339842 systemd-logind[1511]: Removed session 8. Feb 13 20:25:23.502488 systemd[1]: Started sshd@6-10.244.92.114:22-139.178.89.65:55432.service - OpenSSH per-connection server daemon (139.178.89.65:55432). Feb 13 20:25:24.412167 sshd[1772]: Accepted publickey for core from 139.178.89.65 port 55432 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:25:24.415451 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:24.425933 systemd-logind[1511]: New session 9 of user core. Feb 13 20:25:24.434937 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:25:24.895829 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:25:24.896124 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:25:25.360215 (dockerd)[1793]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:25:25.360524 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:25:25.806391 dockerd[1793]: time="2025-02-13T20:25:25.806236032Z" level=info msg="Starting up" Feb 13 20:25:25.884352 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3126149643-merged.mount: Deactivated successfully. Feb 13 20:25:25.909105 dockerd[1793]: time="2025-02-13T20:25:25.908368643Z" level=info msg="Loading containers: start." Feb 13 20:25:26.072819 kernel: Initializing XFRM netlink socket Feb 13 20:25:26.165794 systemd-networkd[1441]: docker0: Link UP Feb 13 20:25:26.189784 dockerd[1793]: time="2025-02-13T20:25:26.189468398Z" level=info msg="Loading containers: done." 
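In the audit-rules sequence above, augenrules compiles any /etc/audit/rules.d/*.rules fragments into the loaded ruleset; after the two rule files were removed via sudo, it reports 'No rules'. For illustration only, a rules.d fragment has this shape (no such file exists on this host):

    # /etc/audit/rules.d/10-example.rules (illustrative)
    -w /etc/passwd -p wa -k identity
    -a always,exit -F arch=b64 -S execve -k exec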
Feb 13 20:25:26.213002 dockerd[1793]: time="2025-02-13T20:25:26.211742015Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:25:26.213002 dockerd[1793]: time="2025-02-13T20:25:26.211885204Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 20:25:26.213002 dockerd[1793]: time="2025-02-13T20:25:26.212013215Z" level=info msg="Daemon has completed initialization" Feb 13 20:25:26.241219 dockerd[1793]: time="2025-02-13T20:25:26.241007553Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:25:26.241561 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:25:27.071049 containerd[1536]: time="2025-02-13T20:25:27.070956859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:25:27.841888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017297505.mount: Deactivated successfully. Feb 13 20:25:29.315136 containerd[1536]: time="2025-02-13T20:25:29.315083772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:29.317597 containerd[1536]: time="2025-02-13T20:25:29.316104986Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673939" Feb 13 20:25:29.317597 containerd[1536]: time="2025-02-13T20:25:29.316314045Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:29.320714 containerd[1536]: time="2025-02-13T20:25:29.320636006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:29.321810 containerd[1536]: time="2025-02-13T20:25:29.321768080Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 2.25071612s" Feb 13 20:25:29.321810 containerd[1536]: time="2025-02-13T20:25:29.321814804Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 20:25:29.323067 containerd[1536]: time="2025-02-13T20:25:29.322983329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:25:30.319151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:25:30.327933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:30.470908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
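"Scheduled restart job, restart counter is at 2" above is systemd re-running the failed kubelet unit under its restart policy; the unit will keep cycling until the missing config appears. The relevant settings are of this shape (a sketch, not the kubelet.service shipped here):

    # kubelet.service (restart-policy sketch)
    [Service]
    Restart=always
    RestartSec=10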
Feb 13 20:25:30.472729 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:25:30.538496 kubelet[2048]: E0213 20:25:30.538414 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:25:30.540512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:25:30.540683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:25:30.541081 systemd[1]: kubelet.service: Consumed 169ms CPU time, 103.8M memory peak. Feb 13 20:25:31.229854 containerd[1536]: time="2025-02-13T20:25:31.228838506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:31.229854 containerd[1536]: time="2025-02-13T20:25:31.229523883Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771792" Feb 13 20:25:31.229854 containerd[1536]: time="2025-02-13T20:25:31.229808647Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:31.236460 containerd[1536]: time="2025-02-13T20:25:31.236429265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:31.236662 containerd[1536]: time="2025-02-13T20:25:31.236638698Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.913398619s" Feb 13 20:25:31.236734 containerd[1536]: time="2025-02-13T20:25:31.236722027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 20:25:31.237284 containerd[1536]: time="2025-02-13T20:25:31.237259155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:25:32.580924 containerd[1536]: time="2025-02-13T20:25:32.580861125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:32.582038 containerd[1536]: time="2025-02-13T20:25:32.581991820Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170284" Feb 13 20:25:32.583807 containerd[1536]: time="2025-02-13T20:25:32.582479439Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:32.585320 containerd[1536]: time="2025-02-13T20:25:32.585286780Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:32.586498 containerd[1536]: time="2025-02-13T20:25:32.586466886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.349177325s" Feb 13 20:25:32.586557 containerd[1536]: time="2025-02-13T20:25:32.586504811Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 20:25:32.587781 containerd[1536]: time="2025-02-13T20:25:32.587743704Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:25:33.864529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425703050.mount: Deactivated successfully. Feb 13 20:25:34.353573 containerd[1536]: time="2025-02-13T20:25:34.353520893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:34.354634 containerd[1536]: time="2025-02-13T20:25:34.354600538Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908847" Feb 13 20:25:34.355183 containerd[1536]: time="2025-02-13T20:25:34.355160463Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:34.357052 containerd[1536]: time="2025-02-13T20:25:34.356767569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:34.357727 containerd[1536]: time="2025-02-13T20:25:34.357699364Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.769837432s" Feb 13 20:25:34.357805 containerd[1536]: time="2025-02-13T20:25:34.357732638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 20:25:34.358374 containerd[1536]: time="2025-02-13T20:25:34.358352892Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:25:34.970970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104719553.mount: Deactivated successfully. 
Feb 13 20:25:36.002877 containerd[1536]: time="2025-02-13T20:25:36.002815666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:36.005205 containerd[1536]: time="2025-02-13T20:25:36.004432421Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Feb 13 20:25:36.006019 containerd[1536]: time="2025-02-13T20:25:36.005983091Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:36.013058 containerd[1536]: time="2025-02-13T20:25:36.013015694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:36.015722 containerd[1536]: time="2025-02-13T20:25:36.015683461Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.657219136s" Feb 13 20:25:36.015872 containerd[1536]: time="2025-02-13T20:25:36.015852307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 20:25:36.017015 containerd[1536]: time="2025-02-13T20:25:36.016982367Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:25:36.566491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531976396.mount: Deactivated successfully. 
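Summing the five control-plane image pulls completed so far (sizes and durations copied from the containerd messages above) gives a feel for the total transfer:

```python
# (image size in bytes, pull time in seconds), from the log lines above.
pulls = {
    "kube-apiserver:v1.32.2":          (28_670_731, 2.250716),
    "kube-controller-manager:v1.32.2": (26_259_392, 1.913399),
    "kube-scheduler:v1.32.2":          (20_657_902, 1.349177),
    "kube-proxy:v1.32.2":              (30_907_858, 1.769837),
    "coredns/coredns:v1.11.3":         (18_562_039, 1.657219),
}
total_bytes = sum(b for b, _ in pulls.values())
total_secs = sum(t for _, t in pulls.values())
print(f"{total_bytes / 1e6:.0f} MB in {total_secs:.1f} s "
      f"(~{total_bytes / total_secs / 1e6:.1f} MB/s average)")
# ~125 MB in ~8.9 s (~14 MB/s)
```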
Feb 13 20:25:36.567415 containerd[1536]: time="2025-02-13T20:25:36.567361476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:36.568832 containerd[1536]: time="2025-02-13T20:25:36.568625231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Feb 13 20:25:36.569496 containerd[1536]: time="2025-02-13T20:25:36.569461072Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:36.572448 containerd[1536]: time="2025-02-13T20:25:36.571395601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:36.572448 containerd[1536]: time="2025-02-13T20:25:36.572319473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 555.299829ms" Feb 13 20:25:36.572448 containerd[1536]: time="2025-02-13T20:25:36.572351948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 20:25:36.573216 containerd[1536]: time="2025-02-13T20:25:36.573168966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:25:37.234191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109799216.mount: Deactivated successfully. Feb 13 20:25:39.455359 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 20:25:40.570170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:25:40.584233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:40.746070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:40.761400 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:25:40.822300 kubelet[2186]: E0213 20:25:40.821809 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:25:40.824721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:25:40.824941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:25:40.825733 systemd[1]: kubelet.service: Consumed 189ms CPU time, 102.7M memory peak. 
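Both kubelet crash loops so far are the same failure: `/var/lib/kubelet/config.yaml` does not exist yet. On a kubeadm-provisioned node that file is written during `kubeadm init`/`kubeadm join`; the sketch below only illustrates the minimal shape of a KubeletConfiguration and is not what kubeadm writes verbatim:

```python
from pathlib import Path

# Minimal KubeletConfiguration (API group kubelet.config.k8s.io/v1beta1).
# The cgroupDriver value matches the CgroupDriver:"systemd" the kubelet
# logs once it does start; staticPodPath matches its "Adding static pod
# path" message.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

def write_placeholder(path: str = "/var/lib/kubelet/config.yaml") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(MINIMAL_KUBELET_CONFIG)
```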
Feb 13 20:25:42.309038 containerd[1536]: time="2025-02-13T20:25:42.308984429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:42.310382 containerd[1536]: time="2025-02-13T20:25:42.310300888Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328" Feb 13 20:25:42.311801 containerd[1536]: time="2025-02-13T20:25:42.310881378Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:42.314328 containerd[1536]: time="2025-02-13T20:25:42.314302877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:42.315540 containerd[1536]: time="2025-02-13T20:25:42.315513318Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.742283419s" Feb 13 20:25:42.315630 containerd[1536]: time="2025-02-13T20:25:42.315615745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 20:25:44.948168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:44.948328 systemd[1]: kubelet.service: Consumed 189ms CPU time, 102.7M memory peak. Feb 13 20:25:44.958331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:44.995766 systemd[1]: Reload requested from client PID 2228 ('systemctl') (unit session-9.scope)... Feb 13 20:25:44.995802 systemd[1]: Reloading... Feb 13 20:25:45.130819 zram_generator::config[2274]: No configuration found. Feb 13 20:25:45.264665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:25:45.367414 systemd[1]: Reloading finished in 371 ms. Feb 13 20:25:45.429631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:45.437228 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:25:45.437480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:45.437532 systemd[1]: kubelet.service: Consumed 104ms CPU time, 91.6M memory peak. Feb 13 20:25:45.445075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:45.587885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:45.593112 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:25:45.647703 kubelet[2342]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:25:45.648772 kubelet[2342]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Feb 13 20:25:45.648772 kubelet[2342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:25:45.648772 kubelet[2342]: I0213 20:25:45.648328 2342 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:25:46.108439 kubelet[2342]: I0213 20:25:46.108387 2342 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:25:46.108439 kubelet[2342]: I0213 20:25:46.108420 2342 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:25:46.108743 kubelet[2342]: I0213 20:25:46.108720 2342 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:25:46.144422 kubelet[2342]: E0213 20:25:46.143787 2342 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.92.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:46.145402 kubelet[2342]: I0213 20:25:46.144811 2342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:25:46.159971 kubelet[2342]: E0213 20:25:46.159921 2342 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:25:46.159971 kubelet[2342]: I0213 20:25:46.159965 2342 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:25:46.165995 kubelet[2342]: I0213 20:25:46.165975 2342 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:25:46.169289 kubelet[2342]: I0213 20:25:46.169168 2342 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:25:46.169432 kubelet[2342]: I0213 20:25:46.169218 2342 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-llv2e.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:25:46.169578 kubelet[2342]: I0213 20:25:46.169436 2342 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:25:46.169578 kubelet[2342]: I0213 20:25:46.169447 2342 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:25:46.169650 kubelet[2342]: I0213 20:25:46.169596 2342 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:46.172689 kubelet[2342]: I0213 20:25:46.172659 2342 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:25:46.172831 kubelet[2342]: I0213 20:25:46.172811 2342 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:25:46.172872 kubelet[2342]: I0213 20:25:46.172844 2342 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:25:46.172872 kubelet[2342]: I0213 20:25:46.172856 2342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:25:46.180867 kubelet[2342]: W0213 20:25:46.180659 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.92.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-llv2e.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection refused Feb 13 20:25:46.181704 kubelet[2342]: E0213 20:25:46.181219 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.92.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-llv2e.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 
20:25:46.181704 kubelet[2342]: I0213 20:25:46.181439 2342 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 20:25:46.188166 kubelet[2342]: I0213 20:25:46.188143 2342 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:25:46.190763 kubelet[2342]: W0213 20:25:46.189413 2342 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:25:46.191673 kubelet[2342]: W0213 20:25:46.191624 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.92.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection refused Feb 13 20:25:46.192111 kubelet[2342]: E0213 20:25:46.191685 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.92.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:46.195325 kubelet[2342]: I0213 20:25:46.195303 2342 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:25:46.195595 kubelet[2342]: I0213 20:25:46.195558 2342 server.go:1287] "Started kubelet" Feb 13 20:25:46.197070 kubelet[2342]: I0213 20:25:46.196673 2342 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:25:46.197070 kubelet[2342]: I0213 20:25:46.196947 2342 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:25:46.198072 kubelet[2342]: I0213 20:25:46.197276 2342 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:25:46.200063 kubelet[2342]: I0213 20:25:46.200045 2342 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:25:46.202651 kubelet[2342]: I0213 20:25:46.202636 2342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:25:46.203363 kubelet[2342]: I0213 20:25:46.203346 2342 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:25:46.208133 kubelet[2342]: I0213 20:25:46.207936 2342 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:25:46.213780 kubelet[2342]: E0213 20:25:46.212912 2342 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:46.214763 kubelet[2342]: I0213 20:25:46.214702 2342 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:25:46.215778 kubelet[2342]: I0213 20:25:46.215652 2342 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:25:46.215887 kubelet[2342]: E0213 20:25:46.203613 2342 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.92.114:6443/api/v1/namespaces/default/events\": dial tcp 10.244.92.114:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-llv2e.gb1.brightbox.com.1823de56f777d207 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-llv2e.gb1.brightbox.com,UID:srv-llv2e.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-llv2e.gb1.brightbox.com,},FirstTimestamp:2025-02-13 20:25:46.195522055 +0000 UTC m=+0.597909454,LastTimestamp:2025-02-13 20:25:46.195522055 +0000 UTC m=+0.597909454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-llv2e.gb1.brightbox.com,}" Feb 13 20:25:46.217230 kubelet[2342]: E0213 20:25:46.217183 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.92.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-llv2e.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.92.114:6443: connect: connection refused" interval="200ms" Feb 13 20:25:46.222841 kubelet[2342]: I0213 20:25:46.222820 2342 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:25:46.223120 kubelet[2342]: I0213 20:25:46.223023 2342 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:25:46.224358 kubelet[2342]: I0213 20:25:46.224327 2342 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:25:46.234634 kubelet[2342]: W0213 20:25:46.224534 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.92.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection refused Feb 13 20:25:46.234634 kubelet[2342]: E0213 20:25:46.234596 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.92.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:46.246574 kubelet[2342]: I0213 20:25:46.244875 2342 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:25:46.246574 kubelet[2342]: I0213 20:25:46.245965 2342 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:25:46.246574 kubelet[2342]: I0213 20:25:46.245999 2342 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:25:46.246574 kubelet[2342]: I0213 20:25:46.246021 2342 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
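The repeated `connect: connection refused` errors against 10.244.92.114:6443 are expected at this point: the kubelet starts before the kube-apiserver it is about to launch as a static pod, and its reflectors simply retry until the socket opens. The same wait-until-up pattern, sketched with a plain `/healthz` probe (the endpoint choice and the disabled certificate check are simplifications; real clients verify the cluster CA):

```python
import ssl
import time
import urllib.request

def wait_for_apiserver(url: str = "https://10.244.92.114:6443/healthz",
                       attempts: int = 30, delay: float = 2.0) -> bool:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # demo only; verify the CA in practice
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:          # covers "connection refused" and timeouts
            time.sleep(delay)
    return False
```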
Feb 13 20:25:46.246574 kubelet[2342]: I0213 20:25:46.246033 2342 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:25:46.246574 kubelet[2342]: E0213 20:25:46.246083 2342 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:25:46.251855 kubelet[2342]: W0213 20:25:46.251813 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.92.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection refused Feb 13 20:25:46.251975 kubelet[2342]: E0213 20:25:46.251867 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.92.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:46.255905 kubelet[2342]: I0213 20:25:46.255883 2342 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:25:46.255905 kubelet[2342]: I0213 20:25:46.255900 2342 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:25:46.256052 kubelet[2342]: I0213 20:25:46.255917 2342 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:46.257442 kubelet[2342]: I0213 20:25:46.257412 2342 policy_none.go:49] "None policy: Start" Feb 13 20:25:46.257516 kubelet[2342]: I0213 20:25:46.257455 2342 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:25:46.257516 kubelet[2342]: I0213 20:25:46.257469 2342 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:25:46.264181 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:25:46.277903 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:25:46.286836 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:25:46.296469 kubelet[2342]: I0213 20:25:46.296434 2342 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:25:46.297610 kubelet[2342]: I0213 20:25:46.296871 2342 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:25:46.297610 kubelet[2342]: I0213 20:25:46.296896 2342 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:25:46.297610 kubelet[2342]: I0213 20:25:46.297486 2342 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:25:46.299421 kubelet[2342]: E0213 20:25:46.299381 2342 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:25:46.299533 kubelet[2342]: E0213 20:25:46.299429 2342 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:46.369285 systemd[1]: Created slice kubepods-burstable-pod62af59e3f89f2107d7da060fae07de99.slice - libcontainer container kubepods-burstable-pod62af59e3f89f2107d7da060fae07de99.slice. 
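The `nodeConfig` blob logged during container-manager creation is one long JSON object; extracting just the hard eviction thresholds makes it readable. The values below are copied from that line:

```python
import json

node_config = json.loads("""{
  "HardEvictionThresholds": [
    {"Signal": "imagefs.inodesFree", "Value": {"Percentage": 0.05}},
    {"Signal": "memory.available",   "Value": {"Quantity": "100Mi"}},
    {"Signal": "nodefs.available",   "Value": {"Percentage": 0.10}},
    {"Signal": "nodefs.inodesFree",  "Value": {"Percentage": 0.05}},
    {"Signal": "imagefs.available",  "Value": {"Percentage": 0.15}}
  ]
}""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v.get("Quantity") or f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]:<20} evict below {limit}')
```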
Feb 13 20:25:46.393321 kubelet[2342]: E0213 20:25:46.393280 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.398295 systemd[1]: Created slice kubepods-burstable-pod42594ea6a8889085c1a463897ac054f6.slice - libcontainer container kubepods-burstable-pod42594ea6a8889085c1a463897ac054f6.slice. Feb 13 20:25:46.401189 kubelet[2342]: I0213 20:25:46.401084 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.402297 kubelet[2342]: E0213 20:25:46.402009 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.92.114:6443/api/v1/nodes\": dial tcp 10.244.92.114:6443: connect: connection refused" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.402490 kubelet[2342]: E0213 20:25:46.402471 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.404439 systemd[1]: Created slice kubepods-burstable-pod00be59dc3345dba7321294145ae1e91b.slice - libcontainer container kubepods-burstable-pod00be59dc3345dba7321294145ae1e91b.slice. Feb 13 20:25:46.406472 kubelet[2342]: E0213 20:25:46.406454 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.417194 kubelet[2342]: I0213 20:25:46.417067 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62af59e3f89f2107d7da060fae07de99-ca-certs\") pod \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" (UID: \"62af59e3f89f2107d7da060fae07de99\") " pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.417194 kubelet[2342]: I0213 20:25:46.417152 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62af59e3f89f2107d7da060fae07de99-k8s-certs\") pod \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" (UID: \"62af59e3f89f2107d7da060fae07de99\") " pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.417194 kubelet[2342]: I0213 20:25:46.417199 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-flexvolume-dir\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.418178 kubelet[2342]: I0213 20:25:46.417246 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-kubeconfig\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.418178 kubelet[2342]: I0213 20:25:46.417332 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.418178 kubelet[2342]: I0213 20:25:46.417418 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00be59dc3345dba7321294145ae1e91b-kubeconfig\") pod \"kube-scheduler-srv-llv2e.gb1.brightbox.com\" (UID: \"00be59dc3345dba7321294145ae1e91b\") " pod="kube-system/kube-scheduler-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.418178 kubelet[2342]: I0213 20:25:46.417462 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62af59e3f89f2107d7da060fae07de99-usr-share-ca-certificates\") pod \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" (UID: \"62af59e3f89f2107d7da060fae07de99\") " pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.418178 kubelet[2342]: I0213 20:25:46.417512 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-ca-certs\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.418581 kubelet[2342]: I0213 20:25:46.417550 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-k8s-certs\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.419140 kubelet[2342]: E0213 20:25:46.419044 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.92.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-llv2e.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.92.114:6443: connect: connection refused" interval="400ms" Feb 13 20:25:46.606791 kubelet[2342]: I0213 20:25:46.606655 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.607715 kubelet[2342]: E0213 20:25:46.607643 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.92.114:6443/api/v1/nodes\": dial tcp 10.244.92.114:6443: connect: connection refused" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:46.700332 containerd[1536]: time="2025-02-13T20:25:46.700082488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-llv2e.gb1.brightbox.com,Uid:62af59e3f89f2107d7da060fae07de99,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:46.705206 containerd[1536]: time="2025-02-13T20:25:46.705127553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-llv2e.gb1.brightbox.com,Uid:42594ea6a8889085c1a463897ac054f6,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:46.708539 containerd[1536]: time="2025-02-13T20:25:46.708126062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-llv2e.gb1.brightbox.com,Uid:00be59dc3345dba7321294145ae1e91b,Namespace:kube-system,Attempt:0,}" 
Feb 13 20:25:46.821043 kubelet[2342]: E0213 20:25:46.820929 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.92.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-llv2e.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.92.114:6443: connect: connection refused" interval="800ms" Feb 13 20:25:47.012610 kubelet[2342]: I0213 20:25:47.011824 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:47.012610 kubelet[2342]: E0213 20:25:47.012504 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.92.114:6443/api/v1/nodes\": dial tcp 10.244.92.114:6443: connect: connection refused" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:47.073957 kubelet[2342]: W0213 20:25:47.073791 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.92.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection refused Feb 13 20:25:47.073957 kubelet[2342]: E0213 20:25:47.073954 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.92.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:47.216170 kubelet[2342]: E0213 20:25:47.215920 2342 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.92.114:6443/api/v1/namespaces/default/events\": dial tcp 10.244.92.114:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-llv2e.gb1.brightbox.com.1823de56f777d207 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-llv2e.gb1.brightbox.com,UID:srv-llv2e.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-llv2e.gb1.brightbox.com,},FirstTimestamp:2025-02-13 20:25:46.195522055 +0000 UTC m=+0.597909454,LastTimestamp:2025-02-13 20:25:46.195522055 +0000 UTC m=+0.597909454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-llv2e.gb1.brightbox.com,}" Feb 13 20:25:47.279831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303089897.mount: Deactivated successfully. 
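Note the lease controller's retry interval doubling on each failure: 200ms, 400ms, now 800ms, and 1.6s and 3.2s further down. A sketch of that schedule; the 7 s ceiling is an assumption, not something visible in this log:

```python
def lease_retry_intervals(start: float = 0.2, cap: float = 7.0):
    """Yield the doubling retry intervals seen in the kubelet log above."""
    interval = start
    while True:
        yield min(interval, cap)
        interval *= 2

gen = lease_retry_intervals()
print([round(next(gen), 1) for _ in range(7)])
# [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 7.0]
```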
Feb 13 20:25:47.284792 containerd[1536]: time="2025-02-13T20:25:47.284742650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:47.285789 containerd[1536]: time="2025-02-13T20:25:47.285680766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:47.287840 containerd[1536]: time="2025-02-13T20:25:47.287737377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 20:25:47.288613 containerd[1536]: time="2025-02-13T20:25:47.288518671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:25:47.290792 containerd[1536]: time="2025-02-13T20:25:47.289602629Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:47.290792 containerd[1536]: time="2025-02-13T20:25:47.290393152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:25:47.293789 containerd[1536]: time="2025-02-13T20:25:47.293638760Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:47.296585 containerd[1536]: time="2025-02-13T20:25:47.295439510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:47.297214 containerd[1536]: time="2025-02-13T20:25:47.297169384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.890016ms" Feb 13 20:25:47.302041 containerd[1536]: time="2025-02-13T20:25:47.302001321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.759768ms" Feb 13 20:25:47.302627 containerd[1536]: time="2025-02-13T20:25:47.302595544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.565859ms" Feb 13 20:25:47.403490 kubelet[2342]: W0213 20:25:47.397604 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.92.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-llv2e.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection 
refused Feb 13 20:25:47.403490 kubelet[2342]: E0213 20:25:47.397733 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.92.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-llv2e.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:47.450204 kubelet[2342]: W0213 20:25:47.449630 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.92.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection refused Feb 13 20:25:47.450204 kubelet[2342]: W0213 20:25:47.450052 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.92.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.92.114:6443: connect: connection refused Feb 13 20:25:47.450204 kubelet[2342]: E0213 20:25:47.450101 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.92.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:47.450204 kubelet[2342]: E0213 20:25:47.449730 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.92.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.92.114:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:47.528310 containerd[1536]: time="2025-02-13T20:25:47.528099253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:47.528310 containerd[1536]: time="2025-02-13T20:25:47.528191683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:47.528310 containerd[1536]: time="2025-02-13T20:25:47.528209055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:47.528856 containerd[1536]: time="2025-02-13T20:25:47.528455711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:47.540610 containerd[1536]: time="2025-02-13T20:25:47.540227720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:47.540610 containerd[1536]: time="2025-02-13T20:25:47.540301485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:47.540610 containerd[1536]: time="2025-02-13T20:25:47.540346714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:47.540610 containerd[1536]: time="2025-02-13T20:25:47.540423845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:47.548333 containerd[1536]: time="2025-02-13T20:25:47.548111783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:47.548333 containerd[1536]: time="2025-02-13T20:25:47.548164561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:47.548333 containerd[1536]: time="2025-02-13T20:25:47.548177387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:47.548333 containerd[1536]: time="2025-02-13T20:25:47.548254902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:47.554924 systemd[1]: Started cri-containerd-7fb072983b96ae6785eea006f1a1d8f04e8baada6151ee82c690d99d9032c4e3.scope - libcontainer container 7fb072983b96ae6785eea006f1a1d8f04e8baada6151ee82c690d99d9032c4e3. Feb 13 20:25:47.586907 systemd[1]: Started cri-containerd-2efefaff083090688bb49fabf5ced5717f34472a74a6754c5b734aa44f7574d4.scope - libcontainer container 2efefaff083090688bb49fabf5ced5717f34472a74a6754c5b734aa44f7574d4. Feb 13 20:25:47.588191 systemd[1]: Started cri-containerd-70e6070d072fe8828f9444ffc28c91508a9b0fd0017382680bd3c43f81535b09.scope - libcontainer container 70e6070d072fe8828f9444ffc28c91508a9b0fd0017382680bd3c43f81535b09. Feb 13 20:25:47.622397 kubelet[2342]: E0213 20:25:47.622363 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.92.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-llv2e.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.92.114:6443: connect: connection refused" interval="1.6s" Feb 13 20:25:47.633239 containerd[1536]: time="2025-02-13T20:25:47.633069814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-llv2e.gb1.brightbox.com,Uid:62af59e3f89f2107d7da060fae07de99,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fb072983b96ae6785eea006f1a1d8f04e8baada6151ee82c690d99d9032c4e3\"" Feb 13 20:25:47.638592 containerd[1536]: time="2025-02-13T20:25:47.638396642Z" level=info msg="CreateContainer within sandbox \"7fb072983b96ae6785eea006f1a1d8f04e8baada6151ee82c690d99d9032c4e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:25:47.660878 containerd[1536]: time="2025-02-13T20:25:47.660846965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-llv2e.gb1.brightbox.com,Uid:42594ea6a8889085c1a463897ac054f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2efefaff083090688bb49fabf5ced5717f34472a74a6754c5b734aa44f7574d4\"" Feb 13 20:25:47.664294 containerd[1536]: time="2025-02-13T20:25:47.664167456Z" level=info msg="CreateContainer within sandbox \"2efefaff083090688bb49fabf5ced5717f34472a74a6754c5b734aa44f7574d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:25:47.665147 containerd[1536]: time="2025-02-13T20:25:47.665123939Z" level=info msg="CreateContainer within sandbox \"7fb072983b96ae6785eea006f1a1d8f04e8baada6151ee82c690d99d9032c4e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"51cc4e1b58c5ddd56d45a772ecff477e30ae10a25c345e073d1b4e1e7231fe4c\"" Feb 13 20:25:47.666781 containerd[1536]: time="2025-02-13T20:25:47.665824222Z" level=info 
msg="StartContainer for \"51cc4e1b58c5ddd56d45a772ecff477e30ae10a25c345e073d1b4e1e7231fe4c\"" Feb 13 20:25:47.670486 containerd[1536]: time="2025-02-13T20:25:47.670460662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-llv2e.gb1.brightbox.com,Uid:00be59dc3345dba7321294145ae1e91b,Namespace:kube-system,Attempt:0,} returns sandbox id \"70e6070d072fe8828f9444ffc28c91508a9b0fd0017382680bd3c43f81535b09\"" Feb 13 20:25:47.673309 containerd[1536]: time="2025-02-13T20:25:47.673278347Z" level=info msg="CreateContainer within sandbox \"70e6070d072fe8828f9444ffc28c91508a9b0fd0017382680bd3c43f81535b09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:25:47.676842 containerd[1536]: time="2025-02-13T20:25:47.676813382Z" level=info msg="CreateContainer within sandbox \"2efefaff083090688bb49fabf5ced5717f34472a74a6754c5b734aa44f7574d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a0225c63464df5a97634a0ac205df9f29a6b7a179f2ce1099ec5bf5bd4e70d02\"" Feb 13 20:25:47.677210 containerd[1536]: time="2025-02-13T20:25:47.677188570Z" level=info msg="StartContainer for \"a0225c63464df5a97634a0ac205df9f29a6b7a179f2ce1099ec5bf5bd4e70d02\"" Feb 13 20:25:47.685561 containerd[1536]: time="2025-02-13T20:25:47.685536147Z" level=info msg="CreateContainer within sandbox \"70e6070d072fe8828f9444ffc28c91508a9b0fd0017382680bd3c43f81535b09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ba05247316eb17167d248da3f6da9c332705bccd17a45f4986f4c07c261a385c\"" Feb 13 20:25:47.687191 containerd[1536]: time="2025-02-13T20:25:47.687171110Z" level=info msg="StartContainer for \"ba05247316eb17167d248da3f6da9c332705bccd17a45f4986f4c07c261a385c\"" Feb 13 20:25:47.706919 systemd[1]: Started cri-containerd-51cc4e1b58c5ddd56d45a772ecff477e30ae10a25c345e073d1b4e1e7231fe4c.scope - libcontainer container 51cc4e1b58c5ddd56d45a772ecff477e30ae10a25c345e073d1b4e1e7231fe4c. Feb 13 20:25:47.727917 systemd[1]: Started cri-containerd-a0225c63464df5a97634a0ac205df9f29a6b7a179f2ce1099ec5bf5bd4e70d02.scope - libcontainer container a0225c63464df5a97634a0ac205df9f29a6b7a179f2ce1099ec5bf5bd4e70d02. Feb 13 20:25:47.738883 systemd[1]: Started cri-containerd-ba05247316eb17167d248da3f6da9c332705bccd17a45f4986f4c07c261a385c.scope - libcontainer container ba05247316eb17167d248da3f6da9c332705bccd17a45f4986f4c07c261a385c. 
Feb 13 20:25:47.782094 containerd[1536]: time="2025-02-13T20:25:47.781857380Z" level=info msg="StartContainer for \"51cc4e1b58c5ddd56d45a772ecff477e30ae10a25c345e073d1b4e1e7231fe4c\" returns successfully" Feb 13 20:25:47.816520 containerd[1536]: time="2025-02-13T20:25:47.815362521Z" level=info msg="StartContainer for \"a0225c63464df5a97634a0ac205df9f29a6b7a179f2ce1099ec5bf5bd4e70d02\" returns successfully" Feb 13 20:25:47.819021 kubelet[2342]: I0213 20:25:47.818333 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:47.819021 kubelet[2342]: E0213 20:25:47.818635 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.92.114:6443/api/v1/nodes\": dial tcp 10.244.92.114:6443: connect: connection refused" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:47.826651 containerd[1536]: time="2025-02-13T20:25:47.826623134Z" level=info msg="StartContainer for \"ba05247316eb17167d248da3f6da9c332705bccd17a45f4986f4c07c261a385c\" returns successfully" Feb 13 20:25:48.262394 kubelet[2342]: E0213 20:25:48.262301 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:48.273770 kubelet[2342]: E0213 20:25:48.269336 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:48.273770 kubelet[2342]: E0213 20:25:48.270046 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:49.272048 kubelet[2342]: E0213 20:25:49.271287 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:49.272048 kubelet[2342]: E0213 20:25:49.271802 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-llv2e.gb1.brightbox.com\" not found" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:49.421141 kubelet[2342]: I0213 20:25:49.421092 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:49.865795 kubelet[2342]: I0213 20:25:49.865609 2342 kubelet_node_status.go:79] "Successfully registered node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:49.865795 kubelet[2342]: E0213 20:25:49.865640 2342 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"srv-llv2e.gb1.brightbox.com\": node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:49.874106 kubelet[2342]: E0213 20:25:49.874069 2342 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:49.927660 kubelet[2342]: E0213 20:25:49.927601 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 20:25:49.975075 kubelet[2342]: E0213 20:25:49.975001 2342 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:50.075897 kubelet[2342]: E0213 20:25:50.075821 2342 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:50.176741 kubelet[2342]: E0213 20:25:50.176424 2342 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:50.277496 kubelet[2342]: E0213 20:25:50.277428 2342 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:50.314877 kubelet[2342]: I0213 20:25:50.314319 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:50.324977 kubelet[2342]: E0213 20:25:50.324928 2342 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:50.324977 kubelet[2342]: I0213 20:25:50.324977 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:50.327827 kubelet[2342]: E0213 20:25:50.327715 2342 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-llv2e.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:50.327827 kubelet[2342]: I0213 20:25:50.327772 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:50.330638 kubelet[2342]: E0213 20:25:50.330572 2342 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:50.339001 kubelet[2342]: I0213 20:25:50.338967 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:50.343233 kubelet[2342]: E0213 20:25:50.343188 2342 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:51.194125 kubelet[2342]: I0213 20:25:51.193938 2342 apiserver.go:52] "Watching apiserver" Feb 13 20:25:51.215345 kubelet[2342]: I0213 20:25:51.215219 2342 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:25:52.061720 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-9.scope)... Feb 13 20:25:52.061737 systemd[1]: Reloading... Feb 13 20:25:52.160886 zram_generator::config[2662]: No configuration found. Feb 13 20:25:52.329107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:25:52.451940 systemd[1]: Reloading finished in 389 ms. Feb 13 20:25:52.468765 update_engine[1514]: I20250213 20:25:52.465939 1514 update_attempter.cc:509] Updating boot flags... Feb 13 20:25:52.485206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:52.517482 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 13 20:25:52.517737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:52.518231 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 125.3M memory peak. Feb 13 20:25:52.541908 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2726) Feb 13 20:25:52.554084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:52.595799 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2725) Feb 13 20:25:52.786205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:52.791129 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:25:52.879217 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:25:52.879217 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:25:52.879217 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:25:52.880224 kubelet[2742]: I0213 20:25:52.880098 2742 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:25:52.899717 kubelet[2742]: I0213 20:25:52.899650 2742 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:25:52.899717 kubelet[2742]: I0213 20:25:52.899704 2742 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:25:52.900404 kubelet[2742]: I0213 20:25:52.900367 2742 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:25:52.908785 kubelet[2742]: I0213 20:25:52.908023 2742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:25:52.914849 kubelet[2742]: I0213 20:25:52.914790 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:25:52.935615 kubelet[2742]: E0213 20:25:52.935554 2742 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:25:52.935879 kubelet[2742]: I0213 20:25:52.935866 2742 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:25:52.940611 kubelet[2742]: I0213 20:25:52.940581 2742 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:25:52.941110 kubelet[2742]: I0213 20:25:52.941075 2742 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:25:52.942355 kubelet[2742]: I0213 20:25:52.941480 2742 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-llv2e.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:25:52.942355 kubelet[2742]: I0213 20:25:52.942076 2742 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:25:52.942355 kubelet[2742]: I0213 20:25:52.942109 2742 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:25:52.944164 kubelet[2742]: I0213 20:25:52.944016 2742 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:52.944804 kubelet[2742]: I0213 20:25:52.944646 2742 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:25:52.944804 kubelet[2742]: I0213 20:25:52.944710 2742 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:25:52.945226 kubelet[2742]: I0213 20:25:52.945075 2742 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:25:52.945226 kubelet[2742]: I0213 20:25:52.945112 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:25:52.949581 kubelet[2742]: I0213 20:25:52.949338 2742 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 20:25:52.951684 kubelet[2742]: I0213 20:25:52.951560 2742 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:25:52.953364 kubelet[2742]: I0213 20:25:52.953338 2742 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:25:52.953443 kubelet[2742]: I0213 20:25:52.953385 2742 server.go:1287] "Started kubelet" Feb 13 20:25:52.971780 kubelet[2742]: I0213 20:25:52.971607 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:25:52.983907 kubelet[2742]: I0213 20:25:52.983775 2742 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:25:52.986741 kubelet[2742]: I0213 20:25:52.986722 2742 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:25:52.990775 kubelet[2742]: I0213 20:25:52.990735 2742 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:25:52.990924 kubelet[2742]: I0213 20:25:52.990842 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:25:52.991183 kubelet[2742]: E0213 20:25:52.991162 2742 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-llv2e.gb1.brightbox.com\" not found" Feb 13 20:25:52.991491 kubelet[2742]: I0213 20:25:52.991476 2742 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:25:52.991676 kubelet[2742]: I0213 20:25:52.991666 2742 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:25:52.992718 kubelet[2742]: I0213 20:25:52.992689 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:25:52.994803 kubelet[2742]: I0213 20:25:52.994777 2742 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:25:53.004272 kubelet[2742]: I0213 20:25:53.004204 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:25:53.007602 kubelet[2742]: I0213 20:25:53.007557 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:25:53.008187 kubelet[2742]: E0213 20:25:53.008024 2742 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:25:53.010915 kubelet[2742]: I0213 20:25:53.010400 2742 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:25:53.010915 kubelet[2742]: I0213 20:25:53.010478 2742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:25:53.010915 kubelet[2742]: I0213 20:25:53.010497 2742 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:25:53.010915 kubelet[2742]: E0213 20:25:53.010612 2742 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:25:53.012338 kubelet[2742]: I0213 20:25:53.012300 2742 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:25:53.012338 kubelet[2742]: I0213 20:25:53.012327 2742 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:25:53.012498 kubelet[2742]: I0213 20:25:53.012411 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:25:53.079564 sudo[2772]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 20:25:53.079980 sudo[2772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.103854 2742 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.103879 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.103901 2742 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.104162 2742 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.104174 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.104202 2742 policy_none.go:49] "None policy: Start" Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.104215 2742 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.104225 2742 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:25:53.105784 kubelet[2742]: I0213 20:25:53.104377 2742 state_mem.go:75] "Updated machine memory state" Feb 13 20:25:53.111742 kubelet[2742]: E0213 20:25:53.111137 2742 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:25:53.118039 kubelet[2742]: I0213 20:25:53.118003 2742 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:25:53.118271 kubelet[2742]: I0213 20:25:53.118252 2742 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:25:53.118765 kubelet[2742]: I0213 20:25:53.118342 2742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:25:53.121783 kubelet[2742]: I0213 20:25:53.119167 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:25:53.126244 kubelet[2742]: E0213 20:25:53.126220 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:25:53.245378 kubelet[2742]: I0213 20:25:53.245298 2742 kubelet_node_status.go:76] "Attempting to register node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.266592 kubelet[2742]: I0213 20:25:53.266518 2742 kubelet_node_status.go:125] "Node was previously registered" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.267414 kubelet[2742]: I0213 20:25:53.267388 2742 kubelet_node_status.go:79] "Successfully registered node" node="srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.315534 kubelet[2742]: I0213 20:25:53.315174 2742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.316086 kubelet[2742]: I0213 20:25:53.316056 2742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.316409 kubelet[2742]: I0213 20:25:53.316226 2742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.324609 kubelet[2742]: W0213 20:25:53.324540 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:25:53.325267 kubelet[2742]: W0213 20:25:53.324967 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:25:53.326144 kubelet[2742]: W0213 20:25:53.325851 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:25:53.397060 kubelet[2742]: I0213 20:25:53.395790 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62af59e3f89f2107d7da060fae07de99-k8s-certs\") pod \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" (UID: \"62af59e3f89f2107d7da060fae07de99\") " pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397060 kubelet[2742]: I0213 20:25:53.395844 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62af59e3f89f2107d7da060fae07de99-usr-share-ca-certificates\") pod \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" (UID: \"62af59e3f89f2107d7da060fae07de99\") " pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397060 kubelet[2742]: I0213 20:25:53.395875 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-kubeconfig\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397060 kubelet[2742]: I0213 20:25:53.395904 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00be59dc3345dba7321294145ae1e91b-kubeconfig\") pod \"kube-scheduler-srv-llv2e.gb1.brightbox.com\" (UID: \"00be59dc3345dba7321294145ae1e91b\") " pod="kube-system/kube-scheduler-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397060 kubelet[2742]: I0213 20:25:53.395934 2742 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62af59e3f89f2107d7da060fae07de99-ca-certs\") pod \"kube-apiserver-srv-llv2e.gb1.brightbox.com\" (UID: \"62af59e3f89f2107d7da060fae07de99\") " pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397441 kubelet[2742]: I0213 20:25:53.395962 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-ca-certs\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397441 kubelet[2742]: I0213 20:25:53.395994 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-flexvolume-dir\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397441 kubelet[2742]: I0213 20:25:53.396018 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-k8s-certs\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.397441 kubelet[2742]: I0213 20:25:53.396045 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42594ea6a8889085c1a463897ac054f6-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-llv2e.gb1.brightbox.com\" (UID: \"42594ea6a8889085c1a463897ac054f6\") " pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" Feb 13 20:25:53.759267 sudo[2772]: pam_unix(sudo:session): session closed for user root Feb 13 20:25:53.956856 kubelet[2742]: I0213 20:25:53.956586 2742 apiserver.go:52] "Watching apiserver" Feb 13 20:25:53.992071 kubelet[2742]: I0213 20:25:53.992030 2742 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:25:54.099468 kubelet[2742]: I0213 20:25:54.099391 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-llv2e.gb1.brightbox.com" podStartSLOduration=1.09936333 podStartE2EDuration="1.09936333s" podCreationTimestamp="2025-02-13 20:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:25:54.087512475 +0000 UTC m=+1.289544608" watchObservedRunningTime="2025-02-13 20:25:54.09936333 +0000 UTC m=+1.301395455" Feb 13 20:25:54.120761 kubelet[2742]: I0213 20:25:54.119509 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-llv2e.gb1.brightbox.com" podStartSLOduration=1.119468911 podStartE2EDuration="1.119468911s" podCreationTimestamp="2025-02-13 20:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:25:54.100156173 +0000 UTC m=+1.302188283" 
watchObservedRunningTime="2025-02-13 20:25:54.119468911 +0000 UTC m=+1.321501113" Feb 13 20:25:54.140107 kubelet[2742]: I0213 20:25:54.139297 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-llv2e.gb1.brightbox.com" podStartSLOduration=1.13927483 podStartE2EDuration="1.13927483s" podCreationTimestamp="2025-02-13 20:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:25:54.121135824 +0000 UTC m=+1.323167991" watchObservedRunningTime="2025-02-13 20:25:54.13927483 +0000 UTC m=+1.341306955" Feb 13 20:25:55.206159 sudo[1775]: pam_unix(sudo:session): session closed for user root Feb 13 20:25:55.349921 sshd[1774]: Connection closed by 139.178.89.65 port 55432 Feb 13 20:25:55.353000 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:55.362109 systemd[1]: sshd@6-10.244.92.114:22-139.178.89.65:55432.service: Deactivated successfully. Feb 13 20:25:55.366306 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:25:55.366613 systemd[1]: session-9.scope: Consumed 4.930s CPU time, 208.1M memory peak. Feb 13 20:25:55.369388 systemd-logind[1511]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:25:55.371586 systemd-logind[1511]: Removed session 9. Feb 13 20:25:58.488870 kubelet[2742]: I0213 20:25:58.488667 2742 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:25:58.491942 containerd[1536]: time="2025-02-13T20:25:58.491440652Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:25:58.494192 kubelet[2742]: I0213 20:25:58.491961 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:25:58.644890 systemd[1]: Created slice kubepods-besteffort-pod34934454_7bbb_4c12_8716_3c0b37f17472.slice - libcontainer container kubepods-besteffort-pod34934454_7bbb_4c12_8716_3c0b37f17472.slice. Feb 13 20:25:58.668064 systemd[1]: Created slice kubepods-burstable-pod6f77539a_ba4f_4aed_a331_cdc49c9d4779.slice - libcontainer container kubepods-burstable-pod6f77539a_ba4f_4aed_a331_cdc49c9d4779.slice. 
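Annotation: the slice names just created follow the systemd cgroup driver's escaping rule: the pod's QoS class, the literal "pod", then the pod UID with dashes rewritten as underscores (the same UIDs reappear unescaped in the volume lines below). The mapping is mechanical:

    def pod_slice_name(qos, uid):
        """Reproduce the kubepods slice names seen above (systemd cgroup driver)."""
        return "kubepods-%s-pod%s.slice" % (qos, uid.replace("-", "_"))

    # pod_slice_name("besteffort", "34934454-7bbb-4c12-8716-3c0b37f17472")
    #   -> "kubepods-besteffort-pod34934454_7bbb_4c12_8716_3c0b37f17472.slice"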
Feb 13 20:25:58.734390 kubelet[2742]: I0213 20:25:58.734259 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgsg\" (UniqueName: \"kubernetes.io/projected/34934454-7bbb-4c12-8716-3c0b37f17472-kube-api-access-lpgsg\") pod \"kube-proxy-wtr6z\" (UID: \"34934454-7bbb-4c12-8716-3c0b37f17472\") " pod="kube-system/kube-proxy-wtr6z" Feb 13 20:25:58.734390 kubelet[2742]: I0213 20:25:58.734386 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-bpf-maps\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.734949 kubelet[2742]: I0213 20:25:58.734428 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hostproc\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.734949 kubelet[2742]: I0213 20:25:58.734479 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-cgroup\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.734949 kubelet[2742]: I0213 20:25:58.734520 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cni-path\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.734949 kubelet[2742]: I0213 20:25:58.734560 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-etc-cni-netd\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.734949 kubelet[2742]: I0213 20:25:58.734598 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34934454-7bbb-4c12-8716-3c0b37f17472-lib-modules\") pod \"kube-proxy-wtr6z\" (UID: \"34934454-7bbb-4c12-8716-3c0b37f17472\") " pod="kube-system/kube-proxy-wtr6z" Feb 13 20:25:58.734949 kubelet[2742]: I0213 20:25:58.734651 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-xtables-lock\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735349 kubelet[2742]: I0213 20:25:58.734691 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34934454-7bbb-4c12-8716-3c0b37f17472-kube-proxy\") pod \"kube-proxy-wtr6z\" (UID: \"34934454-7bbb-4c12-8716-3c0b37f17472\") " pod="kube-system/kube-proxy-wtr6z" Feb 13 20:25:58.735349 kubelet[2742]: I0213 20:25:58.734734 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/34934454-7bbb-4c12-8716-3c0b37f17472-xtables-lock\") pod \"kube-proxy-wtr6z\" (UID: \"34934454-7bbb-4c12-8716-3c0b37f17472\") " pod="kube-system/kube-proxy-wtr6z" Feb 13 20:25:58.735349 kubelet[2742]: I0213 20:25:58.734861 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f77539a-ba4f-4aed-a331-cdc49c9d4779-clustermesh-secrets\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735349 kubelet[2742]: I0213 20:25:58.734904 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-net\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735349 kubelet[2742]: I0213 20:25:58.734947 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-config-path\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735561 kubelet[2742]: I0213 20:25:58.734996 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-run\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735561 kubelet[2742]: I0213 20:25:58.735038 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-lib-modules\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735561 kubelet[2742]: I0213 20:25:58.735077 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh99\" (UniqueName: \"kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-kube-api-access-mzh99\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735561 kubelet[2742]: I0213 20:25:58.735121 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-kernel\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.735561 kubelet[2742]: I0213 20:25:58.735158 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hubble-tls\") pod \"cilium-zmtcx\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " pod="kube-system/cilium-zmtcx" Feb 13 20:25:58.954846 containerd[1536]: time="2025-02-13T20:25:58.954699153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtr6z,Uid:34934454-7bbb-4c12-8716-3c0b37f17472,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:58.979731 containerd[1536]: time="2025-02-13T20:25:58.978643906Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:cilium-zmtcx,Uid:6f77539a-ba4f-4aed-a331-cdc49c9d4779,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:59.000607 containerd[1536]: time="2025-02-13T20:25:58.999222123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:59.000607 containerd[1536]: time="2025-02-13T20:25:58.999309572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:59.000607 containerd[1536]: time="2025-02-13T20:25:58.999465430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:59.001607 containerd[1536]: time="2025-02-13T20:25:59.001467067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:59.030217 containerd[1536]: time="2025-02-13T20:25:59.030132413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:59.030534 containerd[1536]: time="2025-02-13T20:25:59.030512370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:59.030631 containerd[1536]: time="2025-02-13T20:25:59.030610129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:59.030863 containerd[1536]: time="2025-02-13T20:25:59.030813531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:59.036979 systemd[1]: Started cri-containerd-aa11ea48b31980efb837bb0ad176d5c4fb758569686eb9bd8b8759e53e4fbbc9.scope - libcontainer container aa11ea48b31980efb837bb0ad176d5c4fb758569686eb9bd8b8759e53e4fbbc9. Feb 13 20:25:59.064453 systemd[1]: Started cri-containerd-8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407.scope - libcontainer container 8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407. 
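Annotation: the VerifyControllerAttachedVolume lines above encode each volume-to-pod attachment twice, once inside the escaped message and once in the trailing pod="namespace/name" field. A small parser that groups them by pod, written against the exact quoting seen in this log (a sketch; the field layout may differ in other kubelet versions):

    import re
    from collections import defaultdict

    VOL = re.compile(r'volume \\"([^"\\]+)\\"')   # escaped quotes inside the message
    POD = re.compile(r' pod="([^"]+)"')           # trailing structured field

    def volumes_by_pod(lines):
        """Group the reconciler log lines above by pod, e.g. cilium-zmtcx -> [bpf-maps, ...]."""
        out = defaultdict(list)
        for line in lines:
            v, p = VOL.search(line), POD.search(line)
            if v and p:
                out[p.group(1)].append(v.group(1))
        return dict(out)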
Feb 13 20:25:59.145196 containerd[1536]: time="2025-02-13T20:25:59.145149937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtr6z,Uid:34934454-7bbb-4c12-8716-3c0b37f17472,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa11ea48b31980efb837bb0ad176d5c4fb758569686eb9bd8b8759e53e4fbbc9\"" Feb 13 20:25:59.154416 containerd[1536]: time="2025-02-13T20:25:59.154369518Z" level=info msg="CreateContainer within sandbox \"aa11ea48b31980efb837bb0ad176d5c4fb758569686eb9bd8b8759e53e4fbbc9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:25:59.176425 containerd[1536]: time="2025-02-13T20:25:59.176279671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmtcx,Uid:6f77539a-ba4f-4aed-a331-cdc49c9d4779,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\"" Feb 13 20:25:59.181950 containerd[1536]: time="2025-02-13T20:25:59.181482724Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 20:25:59.191336 containerd[1536]: time="2025-02-13T20:25:59.191084707Z" level=info msg="CreateContainer within sandbox \"aa11ea48b31980efb837bb0ad176d5c4fb758569686eb9bd8b8759e53e4fbbc9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a21a3fcc1e8d5505cfb51fe52149f70ebc096571ace8ef8561de390c9262479b\"" Feb 13 20:25:59.193094 containerd[1536]: time="2025-02-13T20:25:59.191728765Z" level=info msg="StartContainer for \"a21a3fcc1e8d5505cfb51fe52149f70ebc096571ace8ef8561de390c9262479b\"" Feb 13 20:25:59.241970 systemd[1]: Started cri-containerd-a21a3fcc1e8d5505cfb51fe52149f70ebc096571ace8ef8561de390c9262479b.scope - libcontainer container a21a3fcc1e8d5505cfb51fe52149f70ebc096571ace8ef8561de390c9262479b. Feb 13 20:25:59.283104 containerd[1536]: time="2025-02-13T20:25:59.283027202Z" level=info msg="StartContainer for \"a21a3fcc1e8d5505cfb51fe52149f70ebc096571ace8ef8561de390c9262479b\" returns successfully" Feb 13 20:25:59.604076 systemd[1]: Created slice kubepods-besteffort-podc710d067_c616_4320_8001_fe3879354682.slice - libcontainer container kubepods-besteffort-podc710d067_c616_4320_8001_fe3879354682.slice. Feb 13 20:25:59.641731 kubelet[2742]: I0213 20:25:59.641617 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/c710d067-c616-4320-8001-fe3879354682-kube-api-access-jfzx7\") pod \"cilium-operator-6c4d7847fc-mx8t4\" (UID: \"c710d067-c616-4320-8001-fe3879354682\") " pod="kube-system/cilium-operator-6c4d7847fc-mx8t4" Feb 13 20:25:59.641731 kubelet[2742]: I0213 20:25:59.641700 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c710d067-c616-4320-8001-fe3879354682-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mx8t4\" (UID: \"c710d067-c616-4320-8001-fe3879354682\") " pod="kube-system/cilium-operator-6c4d7847fc-mx8t4" Feb 13 20:25:59.907703 containerd[1536]: time="2025-02-13T20:25:59.907596448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mx8t4,Uid:c710d067-c616-4320-8001-fe3879354682,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:59.939474 containerd[1536]: time="2025-02-13T20:25:59.939035166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:59.939474 containerd[1536]: time="2025-02-13T20:25:59.939087780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:59.939474 containerd[1536]: time="2025-02-13T20:25:59.939099347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:59.939474 containerd[1536]: time="2025-02-13T20:25:59.939167461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:59.971927 systemd[1]: Started cri-containerd-79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7.scope - libcontainer container 79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7. Feb 13 20:26:00.024367 containerd[1536]: time="2025-02-13T20:26:00.024021220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mx8t4,Uid:c710d067-c616-4320-8001-fe3879354682,Namespace:kube-system,Attempt:0,} returns sandbox id \"79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7\"" Feb 13 20:26:00.099848 kubelet[2742]: I0213 20:26:00.099724 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wtr6z" podStartSLOduration=2.09969113 podStartE2EDuration="2.09969113s" podCreationTimestamp="2025-02-13 20:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:26:00.098471942 +0000 UTC m=+7.300504076" watchObservedRunningTime="2025-02-13 20:26:00.09969113 +0000 UTC m=+7.301723348" Feb 13 20:26:05.485332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075781838.mount: Deactivated successfully. 
Feb 13 20:26:07.545409 containerd[1536]: time="2025-02-13T20:26:07.545237173Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:07.546573 containerd[1536]: time="2025-02-13T20:26:07.546403800Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 20:26:07.547831 containerd[1536]: time="2025-02-13T20:26:07.547805807Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:07.549343 containerd[1536]: time="2025-02-13T20:26:07.549317551Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.367792835s" Feb 13 20:26:07.549419 containerd[1536]: time="2025-02-13T20:26:07.549348501Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 20:26:07.551446 containerd[1536]: time="2025-02-13T20:26:07.551425212Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 20:26:07.554785 containerd[1536]: time="2025-02-13T20:26:07.554302424Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:26:07.624258 containerd[1536]: time="2025-02-13T20:26:07.624201732Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\"" Feb 13 20:26:07.625831 containerd[1536]: time="2025-02-13T20:26:07.625796109Z" level=info msg="StartContainer for \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\"" Feb 13 20:26:07.734290 systemd[1]: Started cri-containerd-62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2.scope - libcontainer container 62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2. Feb 13 20:26:07.791446 containerd[1536]: time="2025-02-13T20:26:07.789402540Z" level=info msg="StartContainer for \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\" returns successfully" Feb 13 20:26:07.814901 systemd[1]: cri-containerd-62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2.scope: Deactivated successfully. 
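Annotation: the pull time reported below ("in 8.367792835s") agrees with the surrounding timestamps: PullImage was logged at 20:25:59.181482724 and Pulled at 20:26:07.549317551, about 8.3678 s apart, with the residual ~40 microseconds explained by log-emission jitter rather than any discrepancy. The arithmetic, truncated to the microsecond precision Python's %f accepts:

    from datetime import datetime

    def elapsed(start, end):
        """Seconds between two containerd timestamps (nanoseconds truncated to microseconds)."""
        fmt = "%Y-%m-%dT%H:%M:%S.%f"
        return (datetime.strptime(end[:26], fmt)
                - datetime.strptime(start[:26], fmt)).total_seconds()

    # elapsed("2025-02-13T20:25:59.181482724Z", "2025-02-13T20:26:07.549317551Z")
    #   -> 8.367835, within ~40 microseconds of the reported 8.367792835s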
Feb 13 20:26:08.001704 containerd[1536]: time="2025-02-13T20:26:07.988902523Z" level=info msg="shim disconnected" id=62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2 namespace=k8s.io Feb 13 20:26:08.001704 containerd[1536]: time="2025-02-13T20:26:08.001707590Z" level=warning msg="cleaning up after shim disconnected" id=62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2 namespace=k8s.io Feb 13 20:26:08.002032 containerd[1536]: time="2025-02-13T20:26:08.001725863Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:26:08.141867 containerd[1536]: time="2025-02-13T20:26:08.141793214Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:26:08.158191 containerd[1536]: time="2025-02-13T20:26:08.158138301Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\"" Feb 13 20:26:08.159523 containerd[1536]: time="2025-02-13T20:26:08.159489223Z" level=info msg="StartContainer for \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\"" Feb 13 20:26:08.187923 systemd[1]: Started cri-containerd-992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5.scope - libcontainer container 992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5. Feb 13 20:26:08.221906 containerd[1536]: time="2025-02-13T20:26:08.221869678Z" level=info msg="StartContainer for \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\" returns successfully" Feb 13 20:26:08.237660 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:26:08.237963 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:26:08.238164 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:26:08.245407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:26:08.248249 systemd[1]: cri-containerd-992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5.scope: Deactivated successfully. Feb 13 20:26:08.272769 containerd[1536]: time="2025-02-13T20:26:08.272674596Z" level=info msg="shim disconnected" id=992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5 namespace=k8s.io Feb 13 20:26:08.272769 containerd[1536]: time="2025-02-13T20:26:08.272744015Z" level=warning msg="cleaning up after shim disconnected" id=992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5 namespace=k8s.io Feb 13 20:26:08.272987 containerd[1536]: time="2025-02-13T20:26:08.272781271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:26:08.283244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:26:08.616431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2-rootfs.mount: Deactivated successfully. 
Feb 13 20:26:09.148379 containerd[1536]: time="2025-02-13T20:26:09.148108100Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:26:09.177561 containerd[1536]: time="2025-02-13T20:26:09.177521687Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\"" Feb 13 20:26:09.178137 containerd[1536]: time="2025-02-13T20:26:09.178116005Z" level=info msg="StartContainer for \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\"" Feb 13 20:26:09.224907 systemd[1]: Started cri-containerd-5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa.scope - libcontainer container 5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa. Feb 13 20:26:09.259074 containerd[1536]: time="2025-02-13T20:26:09.258989702Z" level=info msg="StartContainer for \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\" returns successfully" Feb 13 20:26:09.265550 systemd[1]: cri-containerd-5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa.scope: Deactivated successfully. Feb 13 20:26:09.290622 containerd[1536]: time="2025-02-13T20:26:09.290560589Z" level=info msg="shim disconnected" id=5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa namespace=k8s.io Feb 13 20:26:09.290622 containerd[1536]: time="2025-02-13T20:26:09.290617275Z" level=warning msg="cleaning up after shim disconnected" id=5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa namespace=k8s.io Feb 13 20:26:09.290622 containerd[1536]: time="2025-02-13T20:26:09.290625981Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:26:09.613467 systemd[1]: run-containerd-runc-k8s.io-5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa-runc.mMRr3m.mount: Deactivated successfully. Feb 13 20:26:09.613586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa-rootfs.mount: Deactivated successfully. Feb 13 20:26:09.683268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600166370.mount: Deactivated successfully. Feb 13 20:26:10.148019 containerd[1536]: time="2025-02-13T20:26:10.147878881Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:26:10.214057 containerd[1536]: time="2025-02-13T20:26:10.213992783Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\"" Feb 13 20:26:10.216521 containerd[1536]: time="2025-02-13T20:26:10.216285690Z" level=info msg="StartContainer for \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\"" Feb 13 20:26:10.264907 systemd[1]: Started cri-containerd-5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53.scope - libcontainer container 5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53. 
Feb 13 20:26:10.304167 systemd[1]: cri-containerd-5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53.scope: Deactivated successfully. Feb 13 20:26:10.306185 containerd[1536]: time="2025-02-13T20:26:10.306155970Z" level=info msg="StartContainer for \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\" returns successfully" Feb 13 20:26:10.390624 containerd[1536]: time="2025-02-13T20:26:10.390392271Z" level=info msg="shim disconnected" id=5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53 namespace=k8s.io Feb 13 20:26:10.390624 containerd[1536]: time="2025-02-13T20:26:10.390618665Z" level=warning msg="cleaning up after shim disconnected" id=5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53 namespace=k8s.io Feb 13 20:26:10.390944 containerd[1536]: time="2025-02-13T20:26:10.390636206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:26:10.415457 containerd[1536]: time="2025-02-13T20:26:10.415358499Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:10.416459 containerd[1536]: time="2025-02-13T20:26:10.416420583Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 20:26:10.417087 containerd[1536]: time="2025-02-13T20:26:10.416873261Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:10.418240 containerd[1536]: time="2025-02-13T20:26:10.418148767Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.866074437s" Feb 13 20:26:10.418240 containerd[1536]: time="2025-02-13T20:26:10.418177571Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 20:26:10.423682 containerd[1536]: time="2025-02-13T20:26:10.422141033Z" level=info msg="CreateContainer within sandbox \"79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 20:26:10.443274 containerd[1536]: time="2025-02-13T20:26:10.443157879Z" level=info msg="CreateContainer within sandbox \"79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\"" Feb 13 20:26:10.446222 containerd[1536]: time="2025-02-13T20:26:10.444643492Z" level=info msg="StartContainer for \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\"" Feb 13 20:26:10.480988 systemd[1]: Started cri-containerd-9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0.scope - libcontainer container 9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0. 
Feb 13 20:26:10.510649 containerd[1536]: time="2025-02-13T20:26:10.509394277Z" level=info msg="StartContainer for \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\" returns successfully" Feb 13 20:26:11.157680 containerd[1536]: time="2025-02-13T20:26:11.157617032Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:26:11.170950 containerd[1536]: time="2025-02-13T20:26:11.170703414Z" level=info msg="CreateContainer within sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\"" Feb 13 20:26:11.171956 containerd[1536]: time="2025-02-13T20:26:11.171106101Z" level=info msg="StartContainer for \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\"" Feb 13 20:26:11.221905 systemd[1]: Started cri-containerd-71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906.scope - libcontainer container 71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906. Feb 13 20:26:11.314702 containerd[1536]: time="2025-02-13T20:26:11.314640711Z" level=info msg="StartContainer for \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\" returns successfully" Feb 13 20:26:11.429232 kubelet[2742]: I0213 20:26:11.428538 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mx8t4" podStartSLOduration=2.03353026 podStartE2EDuration="12.428489245s" podCreationTimestamp="2025-02-13 20:25:59 +0000 UTC" firstStartedPulling="2025-02-13 20:26:00.025990233 +0000 UTC m=+7.228022342" lastFinishedPulling="2025-02-13 20:26:10.420949213 +0000 UTC m=+17.622981327" observedRunningTime="2025-02-13 20:26:11.314214263 +0000 UTC m=+18.516246392" watchObservedRunningTime="2025-02-13 20:26:11.428489245 +0000 UTC m=+18.630521376" Feb 13 20:26:11.556813 kubelet[2742]: I0213 20:26:11.556491 2742 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 20:26:11.609530 kubelet[2742]: I0213 20:26:11.609381 2742 status_manager.go:890] "Failed to get status for pod" podUID="b0292629-d179-4d9b-a2ee-bd7ab1f89480" pod="kube-system/coredns-668d6bf9bc-pzqwl" err="pods \"coredns-668d6bf9bc-pzqwl\" is forbidden: User \"system:node:srv-llv2e.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-llv2e.gb1.brightbox.com' and this object" Feb 13 20:26:11.615239 systemd[1]: run-containerd-runc-k8s.io-71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906-runc.G1qJ3C.mount: Deactivated successfully. Feb 13 20:26:11.620854 systemd[1]: Created slice kubepods-burstable-podb0292629_d179_4d9b_a2ee_bd7ab1f89480.slice - libcontainer container kubepods-burstable-podb0292629_d179_4d9b_a2ee_bd7ab1f89480.slice. Feb 13 20:26:11.628117 systemd[1]: Created slice kubepods-burstable-pod8f94e70f_c6fa_4205_9db2_4526e8ece30a.slice - libcontainer container kubepods-burstable-pod8f94e70f_c6fa_4205_9db2_4526e8ece30a.slice. 
Feb 13 20:26:11.632145 kubelet[2742]: I0213 20:26:11.631502 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbg4\" (UniqueName: \"kubernetes.io/projected/8f94e70f-c6fa-4205-9db2-4526e8ece30a-kube-api-access-mvbg4\") pod \"coredns-668d6bf9bc-278p4\" (UID: \"8f94e70f-c6fa-4205-9db2-4526e8ece30a\") " pod="kube-system/coredns-668d6bf9bc-278p4" Feb 13 20:26:11.632806 kubelet[2742]: I0213 20:26:11.632612 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4msd4\" (UniqueName: \"kubernetes.io/projected/b0292629-d179-4d9b-a2ee-bd7ab1f89480-kube-api-access-4msd4\") pod \"coredns-668d6bf9bc-pzqwl\" (UID: \"b0292629-d179-4d9b-a2ee-bd7ab1f89480\") " pod="kube-system/coredns-668d6bf9bc-pzqwl" Feb 13 20:26:11.633228 kubelet[2742]: I0213 20:26:11.633196 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0292629-d179-4d9b-a2ee-bd7ab1f89480-config-volume\") pod \"coredns-668d6bf9bc-pzqwl\" (UID: \"b0292629-d179-4d9b-a2ee-bd7ab1f89480\") " pod="kube-system/coredns-668d6bf9bc-pzqwl" Feb 13 20:26:11.633627 kubelet[2742]: I0213 20:26:11.633508 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f94e70f-c6fa-4205-9db2-4526e8ece30a-config-volume\") pod \"coredns-668d6bf9bc-278p4\" (UID: \"8f94e70f-c6fa-4205-9db2-4526e8ece30a\") " pod="kube-system/coredns-668d6bf9bc-278p4" Feb 13 20:26:11.926493 containerd[1536]: time="2025-02-13T20:26:11.926449268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzqwl,Uid:b0292629-d179-4d9b-a2ee-bd7ab1f89480,Namespace:kube-system,Attempt:0,}" Feb 13 20:26:11.933286 containerd[1536]: time="2025-02-13T20:26:11.933254654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-278p4,Uid:8f94e70f-c6fa-4205-9db2-4526e8ece30a,Namespace:kube-system,Attempt:0,}" Feb 13 20:26:13.838800 systemd-networkd[1441]: cilium_host: Link UP Feb 13 20:26:13.839026 systemd-networkd[1441]: cilium_net: Link UP Feb 13 20:26:13.839219 systemd-networkd[1441]: cilium_net: Gained carrier Feb 13 20:26:13.839364 systemd-networkd[1441]: cilium_host: Gained carrier Feb 13 20:26:14.012906 systemd-networkd[1441]: cilium_vxlan: Link UP Feb 13 20:26:14.012916 systemd-networkd[1441]: cilium_vxlan: Gained carrier Feb 13 20:26:14.351086 kernel: NET: Registered PF_ALG protocol family Feb 13 20:26:14.391988 systemd-networkd[1441]: cilium_host: Gained IPv6LL Feb 13 20:26:14.520022 systemd-networkd[1441]: cilium_net: Gained IPv6LL Feb 13 20:26:15.119626 systemd-networkd[1441]: lxc_health: Link UP Feb 13 20:26:15.123360 systemd-networkd[1441]: lxc_health: Gained carrier Feb 13 20:26:15.161171 systemd-networkd[1441]: cilium_vxlan: Gained IPv6LL Feb 13 20:26:15.539219 kernel: eth0: renamed from tmpf5d48 Feb 13 20:26:15.544292 systemd-networkd[1441]: lxc99182759d516: Link UP Feb 13 20:26:15.544552 systemd-networkd[1441]: lxc99182759d516: Gained carrier Feb 13 20:26:15.598649 systemd-networkd[1441]: lxc37a74a8d714f: Link UP Feb 13 20:26:15.601830 kernel: eth0: renamed from tmp7b690 Feb 13 20:26:15.608014 systemd-networkd[1441]: lxc37a74a8d714f: Gained carrier Feb 13 20:26:16.631961 systemd-networkd[1441]: lxc_health: Gained IPv6LL Feb 13 20:26:16.761042 systemd-networkd[1441]: lxc37a74a8d714f: Gained IPv6LL Feb 13 20:26:17.021974 
kubelet[2742]: I0213 20:26:17.019414 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zmtcx" podStartSLOduration=10.648122106 podStartE2EDuration="19.019382433s" podCreationTimestamp="2025-02-13 20:25:58 +0000 UTC" firstStartedPulling="2025-02-13 20:25:59.180020068 +0000 UTC m=+6.382052182" lastFinishedPulling="2025-02-13 20:26:07.551280399 +0000 UTC m=+14.753312509" observedRunningTime="2025-02-13 20:26:12.193936721 +0000 UTC m=+19.395968863" watchObservedRunningTime="2025-02-13 20:26:17.019382433 +0000 UTC m=+24.221414597" Feb 13 20:26:17.529253 systemd-networkd[1441]: lxc99182759d516: Gained IPv6LL Feb 13 20:26:19.669174 containerd[1536]: time="2025-02-13T20:26:19.668991568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:19.669174 containerd[1536]: time="2025-02-13T20:26:19.669075045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:19.669174 containerd[1536]: time="2025-02-13T20:26:19.669114904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:19.671477 containerd[1536]: time="2025-02-13T20:26:19.669693772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:19.683813 containerd[1536]: time="2025-02-13T20:26:19.682366673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:19.683813 containerd[1536]: time="2025-02-13T20:26:19.682435567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:19.683813 containerd[1536]: time="2025-02-13T20:26:19.682451510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:19.683813 containerd[1536]: time="2025-02-13T20:26:19.682571671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:19.720917 systemd[1]: Started cri-containerd-f5d483a75c52ef9bcac4fd069b6cf82ee98c7372b8b627bcd8b516336dcd27dd.scope - libcontainer container f5d483a75c52ef9bcac4fd069b6cf82ee98c7372b8b627bcd8b516336dcd27dd. Feb 13 20:26:19.738640 systemd[1]: run-containerd-runc-k8s.io-7b6909b345cdcaac856e6ffd16b057421ee2eb641ff2b9495a9288e2f5ee0b49-runc.inDsjF.mount: Deactivated successfully. Feb 13 20:26:19.746890 systemd[1]: Started cri-containerd-7b6909b345cdcaac856e6ffd16b057421ee2eb641ff2b9495a9288e2f5ee0b49.scope - libcontainer container 7b6909b345cdcaac856e6ffd16b057421ee2eb641ff2b9495a9288e2f5ee0b49. 
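Annotation: the pod_startup_latency_tracker line above is internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window (firstStartedPulling to lastFinishedPulling), an identity the numbers in this log bear out both for cilium-zmtcx here and for cilium-operator earlier. A worked check (a sketch assuming that relationship; timestamps truncated to microseconds):

    from datetime import datetime

    def _t(s):
        # kubelet timestamps look like "2025-02-13 20:25:59.180020068 +0000 UTC"
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

    def start_slo(e2e, first_pull, last_pull):
        """podStartSLOduration = podStartE2EDuration minus the image-pull window."""
        return e2e - (_t(last_pull) - _t(first_pull)).total_seconds()

    # cilium-zmtcx: start_slo(19.019382433,
    #     "2025-02-13 20:25:59.180020068 +0000 UTC",
    #     "2025-02-13 20:26:07.551280399 +0000 UTC") -> 10.648122..., as logged.
    # cilium-operator: the same formula on 12.428489245 with its pull window
    #     (20:26:00.025990233 to 20:26:10.420949213) -> 2.033530..., matching 2.03353026.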
Feb 13 20:26:19.812924 containerd[1536]: time="2025-02-13T20:26:19.812176804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-278p4,Uid:8f94e70f-c6fa-4205-9db2-4526e8ece30a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5d483a75c52ef9bcac4fd069b6cf82ee98c7372b8b627bcd8b516336dcd27dd\"" Feb 13 20:26:19.817325 containerd[1536]: time="2025-02-13T20:26:19.817213809Z" level=info msg="CreateContainer within sandbox \"f5d483a75c52ef9bcac4fd069b6cf82ee98c7372b8b627bcd8b516336dcd27dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:26:19.834036 containerd[1536]: time="2025-02-13T20:26:19.833847872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzqwl,Uid:b0292629-d179-4d9b-a2ee-bd7ab1f89480,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b6909b345cdcaac856e6ffd16b057421ee2eb641ff2b9495a9288e2f5ee0b49\"" Feb 13 20:26:19.836639 containerd[1536]: time="2025-02-13T20:26:19.836602585Z" level=info msg="CreateContainer within sandbox \"7b6909b345cdcaac856e6ffd16b057421ee2eb641ff2b9495a9288e2f5ee0b49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:26:19.846359 containerd[1536]: time="2025-02-13T20:26:19.846276916Z" level=info msg="CreateContainer within sandbox \"f5d483a75c52ef9bcac4fd069b6cf82ee98c7372b8b627bcd8b516336dcd27dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12005ba9878f0c2ca316611d8ef26ea92b449091f04bac65795139a1067a7b4f\"" Feb 13 20:26:19.847597 containerd[1536]: time="2025-02-13T20:26:19.846915563Z" level=info msg="StartContainer for \"12005ba9878f0c2ca316611d8ef26ea92b449091f04bac65795139a1067a7b4f\"" Feb 13 20:26:19.849251 containerd[1536]: time="2025-02-13T20:26:19.849168050Z" level=info msg="CreateContainer within sandbox \"7b6909b345cdcaac856e6ffd16b057421ee2eb641ff2b9495a9288e2f5ee0b49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b44b7f57f8c309ff0780ff7a01d569d95e7020969dc3388d063cba0a8cf900a2\"" Feb 13 20:26:19.851074 containerd[1536]: time="2025-02-13T20:26:19.851051306Z" level=info msg="StartContainer for \"b44b7f57f8c309ff0780ff7a01d569d95e7020969dc3388d063cba0a8cf900a2\"" Feb 13 20:26:19.888906 systemd[1]: Started cri-containerd-b44b7f57f8c309ff0780ff7a01d569d95e7020969dc3388d063cba0a8cf900a2.scope - libcontainer container b44b7f57f8c309ff0780ff7a01d569d95e7020969dc3388d063cba0a8cf900a2. Feb 13 20:26:19.892807 systemd[1]: Started cri-containerd-12005ba9878f0c2ca316611d8ef26ea92b449091f04bac65795139a1067a7b4f.scope - libcontainer container 12005ba9878f0c2ca316611d8ef26ea92b449091f04bac65795139a1067a7b4f. 
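The sandbox and container lifecycle above is the standard CRI sequence the kubelet drives against containerd: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, and StartContainer launches the task, which systemd then tracks as a cri-containerd-<id>.scope. A minimal sketch of the same sequence over the CRI gRPC API, assuming the k8s.io/cri-api v1 bindings, the default containerd socket path, and an illustrative coredns image reference (the pod metadata is taken from the log):

    package main

    import (
        "context"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the containerd CRI socket (default path assumed).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        // 1. RunPodSandbox: returns the sandbox ID ("f5d483a7..." in the log).
        podCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "coredns-668d6bf9bc-278p4",
                Namespace: "kube-system",
                Uid:       "8f94e70f-c6fa-4205-9db2-4526e8ece30a",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: podCfg})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer within that sandbox (image reference is illustrative).
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
            },
            SandboxConfig: podCfg,
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer: containerd starts the shim and systemd places it
        //    in a cri-containerd-<id>.scope, as logged above.
        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            panic(err)
        }
    }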
Feb 13 20:26:19.935410 containerd[1536]: time="2025-02-13T20:26:19.934325207Z" level=info msg="StartContainer for \"12005ba9878f0c2ca316611d8ef26ea92b449091f04bac65795139a1067a7b4f\" returns successfully" Feb 13 20:26:19.941774 containerd[1536]: time="2025-02-13T20:26:19.941717685Z" level=info msg="StartContainer for \"b44b7f57f8c309ff0780ff7a01d569d95e7020969dc3388d063cba0a8cf900a2\" returns successfully" Feb 13 20:26:20.209852 kubelet[2742]: I0213 20:26:20.209716 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-278p4" podStartSLOduration=21.20969891 podStartE2EDuration="21.20969891s" podCreationTimestamp="2025-02-13 20:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:26:20.208271285 +0000 UTC m=+27.410303419" watchObservedRunningTime="2025-02-13 20:26:20.20969891 +0000 UTC m=+27.411731038" Feb 13 20:26:20.243396 kubelet[2742]: I0213 20:26:20.243043 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pzqwl" podStartSLOduration=21.243022581 podStartE2EDuration="21.243022581s" podCreationTimestamp="2025-02-13 20:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:26:20.225812517 +0000 UTC m=+27.427844670" watchObservedRunningTime="2025-02-13 20:26:20.243022581 +0000 UTC m=+27.445054706" Feb 13 20:27:05.024278 systemd[1]: Started sshd@7-10.244.92.114:22-139.178.89.65:34168.service - OpenSSH per-connection server daemon (139.178.89.65:34168). Feb 13 20:27:05.952980 sshd[4137]: Accepted publickey for core from 139.178.89.65 port 34168 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:05.956679 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:05.969381 systemd-logind[1511]: New session 10 of user core. Feb 13 20:27:05.975030 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:27:07.060895 sshd[4139]: Connection closed by 139.178.89.65 port 34168 Feb 13 20:27:07.062291 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:07.072232 systemd[1]: sshd@7-10.244.92.114:22-139.178.89.65:34168.service: Deactivated successfully. Feb 13 20:27:07.079737 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:27:07.080634 systemd-logind[1511]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:27:07.081733 systemd-logind[1511]: Removed session 10. Feb 13 20:27:12.230133 systemd[1]: Started sshd@8-10.244.92.114:22-139.178.89.65:34184.service - OpenSSH per-connection server daemon (139.178.89.65:34184). Feb 13 20:27:13.158625 sshd[4151]: Accepted publickey for core from 139.178.89.65 port 34184 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:13.161050 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:13.170548 systemd-logind[1511]: New session 11 of user core. Feb 13 20:27:13.175952 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:27:13.869674 sshd[4153]: Connection closed by 139.178.89.65 port 34184 Feb 13 20:27:13.871055 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:13.879306 systemd[1]: sshd@8-10.244.92.114:22-139.178.89.65:34184.service: Deactivated successfully. 
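Note the zero-valued pull timestamps ("0001-01-01 00:00:00 +0000 UTC") in the two coredns latency entries above: the image was already present on the node, so no pull occurred, the tracker leaves both pull timestamps at Go's zero time, and podStartSLOduration equals podStartE2EDuration (21.20969891s for coredns-668d6bf9bc-278p4). A two-line illustration of why that sentinel value appears:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var firstStartedPulling time.Time                 // zero value
        fmt.Println(firstStartedPulling)                  // 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(firstStartedPulling.IsZero())         // true: no pull happened, so
        // podStartSLOduration == podStartE2EDuration, as in the two entries above
    }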
Feb 13 20:27:13.884172 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:27:13.885628 systemd-logind[1511]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:27:13.886701 systemd-logind[1511]: Removed session 11. Feb 13 20:27:19.036221 systemd[1]: Started sshd@9-10.244.92.114:22-139.178.89.65:56334.service - OpenSSH per-connection server daemon (139.178.89.65:56334). Feb 13 20:27:19.959825 sshd[4168]: Accepted publickey for core from 139.178.89.65 port 56334 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:19.962507 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:19.971912 systemd-logind[1511]: New session 12 of user core. Feb 13 20:27:19.978987 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:27:20.670902 sshd[4170]: Connection closed by 139.178.89.65 port 56334 Feb 13 20:27:20.672726 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:20.681409 systemd[1]: sshd@9-10.244.92.114:22-139.178.89.65:56334.service: Deactivated successfully. Feb 13 20:27:20.685437 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:27:20.687860 systemd-logind[1511]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:27:20.689221 systemd-logind[1511]: Removed session 12. Feb 13 20:27:25.836862 systemd[1]: Started sshd@10-10.244.92.114:22-139.178.89.65:47540.service - OpenSSH per-connection server daemon (139.178.89.65:47540). Feb 13 20:27:26.737118 sshd[4184]: Accepted publickey for core from 139.178.89.65 port 47540 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:26.740545 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:26.751338 systemd-logind[1511]: New session 13 of user core. Feb 13 20:27:26.754297 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:27:27.463853 sshd[4186]: Connection closed by 139.178.89.65 port 47540 Feb 13 20:27:27.465287 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:27.482211 systemd[1]: sshd@10-10.244.92.114:22-139.178.89.65:47540.service: Deactivated successfully. Feb 13 20:27:27.488594 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:27:27.489990 systemd-logind[1511]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:27:27.490962 systemd-logind[1511]: Removed session 13. Feb 13 20:27:27.635438 systemd[1]: Started sshd@11-10.244.92.114:22-139.178.89.65:47550.service - OpenSSH per-connection server daemon (139.178.89.65:47550). Feb 13 20:27:28.562916 sshd[4198]: Accepted publickey for core from 139.178.89.65 port 47550 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:28.565406 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:28.579739 systemd-logind[1511]: New session 14 of user core. Feb 13 20:27:28.588010 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:27:29.351062 sshd[4200]: Connection closed by 139.178.89.65 port 47550 Feb 13 20:27:29.352049 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:29.366638 systemd[1]: sshd@11-10.244.92.114:22-139.178.89.65:47550.service: Deactivated successfully. Feb 13 20:27:29.370021 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:27:29.371576 systemd-logind[1511]: Session 14 logged out. 
Waiting for processes to exit. Feb 13 20:27:29.373225 systemd-logind[1511]: Removed session 14. Feb 13 20:27:29.524137 systemd[1]: Started sshd@12-10.244.92.114:22-139.178.89.65:47564.service - OpenSSH per-connection server daemon (139.178.89.65:47564). Feb 13 20:27:30.439809 sshd[4210]: Accepted publickey for core from 139.178.89.65 port 47564 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:30.442422 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:30.453311 systemd-logind[1511]: New session 15 of user core. Feb 13 20:27:30.465960 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:27:31.135299 sshd[4215]: Connection closed by 139.178.89.65 port 47564 Feb 13 20:27:31.136614 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:31.144853 systemd[1]: sshd@12-10.244.92.114:22-139.178.89.65:47564.service: Deactivated successfully. Feb 13 20:27:31.148059 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:27:31.149563 systemd-logind[1511]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:27:31.151604 systemd-logind[1511]: Removed session 15. Feb 13 20:27:36.304104 systemd[1]: Started sshd@13-10.244.92.114:22-139.178.89.65:42042.service - OpenSSH per-connection server daemon (139.178.89.65:42042). Feb 13 20:27:37.251858 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 42042 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:37.253369 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:37.263126 systemd-logind[1511]: New session 16 of user core. Feb 13 20:27:37.270211 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:27:37.952883 sshd[4228]: Connection closed by 139.178.89.65 port 42042 Feb 13 20:27:37.954131 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:37.963618 systemd[1]: sshd@13-10.244.92.114:22-139.178.89.65:42042.service: Deactivated successfully. Feb 13 20:27:37.967228 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:27:37.969000 systemd-logind[1511]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:27:37.971193 systemd-logind[1511]: Removed session 16. Feb 13 20:27:43.120092 systemd[1]: Started sshd@14-10.244.92.114:22-139.178.89.65:42050.service - OpenSSH per-connection server daemon (139.178.89.65:42050). Feb 13 20:27:44.017126 sshd[4240]: Accepted publickey for core from 139.178.89.65 port 42050 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:44.020611 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:44.032413 systemd-logind[1511]: New session 17 of user core. Feb 13 20:27:44.043060 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:27:44.741008 sshd[4242]: Connection closed by 139.178.89.65 port 42050 Feb 13 20:27:44.743062 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:44.750960 systemd-logind[1511]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:27:44.752185 systemd[1]: sshd@14-10.244.92.114:22-139.178.89.65:42050.service: Deactivated successfully. Feb 13 20:27:44.755984 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:27:44.759293 systemd-logind[1511]: Removed session 17. 
Feb 13 20:27:44.912279 systemd[1]: Started sshd@15-10.244.92.114:22-139.178.89.65:57662.service - OpenSSH per-connection server daemon (139.178.89.65:57662). Feb 13 20:27:45.824409 sshd[4254]: Accepted publickey for core from 139.178.89.65 port 57662 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:45.828429 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:45.841171 systemd-logind[1511]: New session 18 of user core. Feb 13 20:27:45.844958 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:27:46.773623 sshd[4256]: Connection closed by 139.178.89.65 port 57662 Feb 13 20:27:46.776559 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:46.786826 systemd[1]: sshd@15-10.244.92.114:22-139.178.89.65:57662.service: Deactivated successfully. Feb 13 20:27:46.790529 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:27:46.793706 systemd-logind[1511]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:27:46.794942 systemd-logind[1511]: Removed session 18. Feb 13 20:27:46.944138 systemd[1]: Started sshd@16-10.244.92.114:22-139.178.89.65:57678.service - OpenSSH per-connection server daemon (139.178.89.65:57678). Feb 13 20:27:47.857307 sshd[4266]: Accepted publickey for core from 139.178.89.65 port 57678 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:47.859555 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:47.867444 systemd-logind[1511]: New session 19 of user core. Feb 13 20:27:47.874109 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:27:49.662488 sshd[4268]: Connection closed by 139.178.89.65 port 57678 Feb 13 20:27:49.665028 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:49.674442 systemd[1]: sshd@16-10.244.92.114:22-139.178.89.65:57678.service: Deactivated successfully. Feb 13 20:27:49.681532 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:27:49.684895 systemd-logind[1511]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:27:49.687052 systemd-logind[1511]: Removed session 19. Feb 13 20:27:49.830423 systemd[1]: Started sshd@17-10.244.92.114:22-139.178.89.65:57684.service - OpenSSH per-connection server daemon (139.178.89.65:57684). Feb 13 20:27:50.732109 sshd[4286]: Accepted publickey for core from 139.178.89.65 port 57684 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:50.736791 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:50.748441 systemd-logind[1511]: New session 20 of user core. Feb 13 20:27:50.754993 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:27:51.594299 sshd[4288]: Connection closed by 139.178.89.65 port 57684 Feb 13 20:27:51.594073 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:51.602025 systemd-logind[1511]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:27:51.602350 systemd[1]: sshd@17-10.244.92.114:22-139.178.89.65:57684.service: Deactivated successfully. Feb 13 20:27:51.605624 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:27:51.608404 systemd-logind[1511]: Removed session 20. Feb 13 20:27:51.763491 systemd[1]: Started sshd@18-10.244.92.114:22-139.178.89.65:57694.service - OpenSSH per-connection server daemon (139.178.89.65:57694). 
Feb 13 20:27:52.659272 sshd[4298]: Accepted publickey for core from 139.178.89.65 port 57694 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:52.663042 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:52.674390 systemd-logind[1511]: New session 21 of user core. Feb 13 20:27:52.684929 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:27:53.374377 sshd[4300]: Connection closed by 139.178.89.65 port 57694 Feb 13 20:27:53.374126 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:53.382484 systemd[1]: sshd@18-10.244.92.114:22-139.178.89.65:57694.service: Deactivated successfully. Feb 13 20:27:53.386582 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:27:53.388338 systemd-logind[1511]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:27:53.390025 systemd-logind[1511]: Removed session 21. Feb 13 20:27:58.534790 systemd[1]: Started sshd@19-10.244.92.114:22-139.178.89.65:57480.service - OpenSSH per-connection server daemon (139.178.89.65:57480). Feb 13 20:27:59.436638 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 57480 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:27:59.439903 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:59.449841 systemd-logind[1511]: New session 22 of user core. Feb 13 20:27:59.455986 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:28:00.134390 sshd[4318]: Connection closed by 139.178.89.65 port 57480 Feb 13 20:28:00.135443 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:00.142768 systemd[1]: sshd@19-10.244.92.114:22-139.178.89.65:57480.service: Deactivated successfully. Feb 13 20:28:00.146473 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:28:00.147791 systemd-logind[1511]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:28:00.149058 systemd-logind[1511]: Removed session 22. Feb 13 20:28:05.304069 systemd[1]: Started sshd@20-10.244.92.114:22-139.178.89.65:39446.service - OpenSSH per-connection server daemon (139.178.89.65:39446). Feb 13 20:28:06.224196 sshd[4331]: Accepted publickey for core from 139.178.89.65 port 39446 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:28:06.227695 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:06.241762 systemd-logind[1511]: New session 23 of user core. Feb 13 20:28:06.247918 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:28:06.963113 sshd[4333]: Connection closed by 139.178.89.65 port 39446 Feb 13 20:28:06.964867 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:06.975042 systemd[1]: sshd@20-10.244.92.114:22-139.178.89.65:39446.service: Deactivated successfully. Feb 13 20:28:06.979112 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:28:06.980558 systemd-logind[1511]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:28:06.982439 systemd-logind[1511]: Removed session 23. Feb 13 20:28:12.129152 systemd[1]: Started sshd@21-10.244.92.114:22-139.178.89.65:39460.service - OpenSSH per-connection server daemon (139.178.89.65:39460). 
Feb 13 20:28:13.048661 sshd[4345]: Accepted publickey for core from 139.178.89.65 port 39460 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:28:13.053740 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:13.065597 systemd-logind[1511]: New session 24 of user core. Feb 13 20:28:13.067984 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:28:13.758204 sshd[4347]: Connection closed by 139.178.89.65 port 39460 Feb 13 20:28:13.760209 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:13.769043 systemd[1]: sshd@21-10.244.92.114:22-139.178.89.65:39460.service: Deactivated successfully. Feb 13 20:28:13.771926 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:28:13.773064 systemd-logind[1511]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:28:13.774233 systemd-logind[1511]: Removed session 24. Feb 13 20:28:13.928058 systemd[1]: Started sshd@22-10.244.92.114:22-139.178.89.65:39472.service - OpenSSH per-connection server daemon (139.178.89.65:39472). Feb 13 20:28:14.828490 sshd[4358]: Accepted publickey for core from 139.178.89.65 port 39472 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:28:14.832265 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:14.846308 systemd-logind[1511]: New session 25 of user core. Feb 13 20:28:14.851910 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:28:16.671149 systemd[1]: run-containerd-runc-k8s.io-71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906-runc.1gB7hs.mount: Deactivated successfully. Feb 13 20:28:16.696288 containerd[1536]: time="2025-02-13T20:28:16.696242442Z" level=info msg="StopContainer for \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\" with timeout 30 (s)" Feb 13 20:28:16.699378 containerd[1536]: time="2025-02-13T20:28:16.698642761Z" level=info msg="Stop container \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\" with signal terminated" Feb 13 20:28:16.707152 containerd[1536]: time="2025-02-13T20:28:16.707051876Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:28:16.709556 containerd[1536]: time="2025-02-13T20:28:16.709413546Z" level=info msg="StopContainer for \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\" with timeout 2 (s)" Feb 13 20:28:16.709900 containerd[1536]: time="2025-02-13T20:28:16.709866444Z" level=info msg="Stop container \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\" with signal terminated" Feb 13 20:28:16.716009 systemd[1]: cri-containerd-9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0.scope: Deactivated successfully. Feb 13 20:28:16.724308 systemd-networkd[1441]: lxc_health: Link DOWN Feb 13 20:28:16.725670 systemd-networkd[1441]: lxc_health: Lost carrier Feb 13 20:28:16.742377 systemd[1]: cri-containerd-71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906.scope: Deactivated successfully. Feb 13 20:28:16.742859 systemd[1]: cri-containerd-71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906.scope: Consumed 7.687s CPU time, 195.7M memory peak, 72M read from disk, 13.3M written to disk. 
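The two StopContainer calls above carry different grace periods: 30 s for the cilium-operator container and 2 s for the agent. Under CRI semantics the Timeout field is the number of seconds the runtime waits after delivering the stop signal before escalating to SIGKILL, which is why both log lines end with "with signal terminated". A minimal sketch of issuing the first stop over the CRI API, assuming the k8s.io/cri-api v1 bindings and the default containerd socket (the container ID is the one from the log):

    package main

    import (
        "context"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        // Timeout is the grace period in seconds before SIGKILL, matching the
        // "with timeout 30 (s)" line in the log.
        _, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0",
            Timeout:     30,
        })
        if err != nil {
            panic(err)
        }
    }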
Feb 13 20:28:16.760301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0-rootfs.mount: Deactivated successfully. Feb 13 20:28:16.762578 containerd[1536]: time="2025-02-13T20:28:16.762500367Z" level=info msg="shim disconnected" id=9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0 namespace=k8s.io Feb 13 20:28:16.762578 containerd[1536]: time="2025-02-13T20:28:16.762572270Z" level=warning msg="cleaning up after shim disconnected" id=9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0 namespace=k8s.io Feb 13 20:28:16.762823 containerd[1536]: time="2025-02-13T20:28:16.762581760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:28:16.777707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906-rootfs.mount: Deactivated successfully. Feb 13 20:28:16.782893 containerd[1536]: time="2025-02-13T20:28:16.782763525Z" level=info msg="shim disconnected" id=71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906 namespace=k8s.io Feb 13 20:28:16.782893 containerd[1536]: time="2025-02-13T20:28:16.782819782Z" level=warning msg="cleaning up after shim disconnected" id=71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906 namespace=k8s.io Feb 13 20:28:16.782893 containerd[1536]: time="2025-02-13T20:28:16.782830284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:28:16.793155 containerd[1536]: time="2025-02-13T20:28:16.793017968Z" level=info msg="StopContainer for \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\" returns successfully" Feb 13 20:28:16.801532 containerd[1536]: time="2025-02-13T20:28:16.801505259Z" level=info msg="StopPodSandbox for \"79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7\"" Feb 13 20:28:16.803612 containerd[1536]: time="2025-02-13T20:28:16.803121172Z" level=info msg="Container to stop \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:28:16.808419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7-shm.mount: Deactivated successfully. Feb 13 20:28:16.813136 systemd[1]: cri-containerd-79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7.scope: Deactivated successfully. 
Feb 13 20:28:16.815602 containerd[1536]: time="2025-02-13T20:28:16.815575624Z" level=info msg="StopContainer for \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\" returns successfully" Feb 13 20:28:16.816202 containerd[1536]: time="2025-02-13T20:28:16.816149017Z" level=info msg="StopPodSandbox for \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\"" Feb 13 20:28:16.816202 containerd[1536]: time="2025-02-13T20:28:16.816178073Z" level=info msg="Container to stop \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:28:16.816299 containerd[1536]: time="2025-02-13T20:28:16.816209756Z" level=info msg="Container to stop \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:28:16.816299 containerd[1536]: time="2025-02-13T20:28:16.816219704Z" level=info msg="Container to stop \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:28:16.816299 containerd[1536]: time="2025-02-13T20:28:16.816228510Z" level=info msg="Container to stop \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:28:16.816299 containerd[1536]: time="2025-02-13T20:28:16.816235929Z" level=info msg="Container to stop \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:28:16.825420 systemd[1]: cri-containerd-8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407.scope: Deactivated successfully. 
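The repeated "must be in running or unknown state, current state \"CONTAINER_EXITED\"" messages above are informational, not errors: when StopPodSandbox tears a sandbox down, only containers still running (or in an unknown state) need a stop signal, and already-exited ones are skipped. A sketch of that check against the CRI state enum; needsStop is a hypothetical helper mirroring the logged behavior, not containerd's actual function:

    package main

    import (
        "fmt"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // needsStop reports whether a container still requires a stop signal:
    // only running or unknown containers do; exited ones are skipped,
    // producing the informational lines seen above.
    func needsStop(s runtimeapi.ContainerState) bool {
        return s == runtimeapi.ContainerState_CONTAINER_RUNNING ||
            s == runtimeapi.ContainerState_CONTAINER_UNKNOWN
    }

    func main() {
        fmt.Println(needsStop(runtimeapi.ContainerState_CONTAINER_EXITED))  // false: skipped
        fmt.Println(needsStop(runtimeapi.ContainerState_CONTAINER_RUNNING)) // true: signal sent
    }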
Feb 13 20:28:16.854540 containerd[1536]: time="2025-02-13T20:28:16.854485653Z" level=info msg="shim disconnected" id=79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7 namespace=k8s.io Feb 13 20:28:16.855142 containerd[1536]: time="2025-02-13T20:28:16.855007201Z" level=warning msg="cleaning up after shim disconnected" id=79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7 namespace=k8s.io Feb 13 20:28:16.855142 containerd[1536]: time="2025-02-13T20:28:16.855026611Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:28:16.857035 containerd[1536]: time="2025-02-13T20:28:16.856996348Z" level=info msg="shim disconnected" id=8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407 namespace=k8s.io Feb 13 20:28:16.857238 containerd[1536]: time="2025-02-13T20:28:16.857115953Z" level=warning msg="cleaning up after shim disconnected" id=8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407 namespace=k8s.io Feb 13 20:28:16.857238 containerd[1536]: time="2025-02-13T20:28:16.857129206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:28:16.873830 containerd[1536]: time="2025-02-13T20:28:16.873784174Z" level=info msg="TearDown network for sandbox \"79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7\" successfully" Feb 13 20:28:16.873830 containerd[1536]: time="2025-02-13T20:28:16.873819286Z" level=info msg="StopPodSandbox for \"79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7\" returns successfully" Feb 13 20:28:16.875286 containerd[1536]: time="2025-02-13T20:28:16.875259105Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:28:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:28:16.876617 containerd[1536]: time="2025-02-13T20:28:16.876588027Z" level=info msg="TearDown network for sandbox \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" successfully" Feb 13 20:28:16.876617 containerd[1536]: time="2025-02-13T20:28:16.876611486Z" level=info msg="StopPodSandbox for \"8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407\" returns successfully" Feb 13 20:28:16.979967 kubelet[2742]: I0213 20:28:16.978835 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f77539a-ba4f-4aed-a331-cdc49c9d4779-clustermesh-secrets\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.979967 kubelet[2742]: I0213 20:28:16.978898 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-net\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.979967 kubelet[2742]: I0213 20:28:16.978920 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-kernel\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.979967 kubelet[2742]: I0213 20:28:16.978941 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-etc-cni-netd\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.979967 kubelet[2742]: I0213 20:28:16.978965 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-config-path\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.979967 kubelet[2742]: I0213 20:28:16.978984 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-run\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.981741 kubelet[2742]: I0213 20:28:16.979004 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c710d067-c616-4320-8001-fe3879354682-cilium-config-path\") pod \"c710d067-c616-4320-8001-fe3879354682\" (UID: \"c710d067-c616-4320-8001-fe3879354682\") " Feb 13 20:28:16.981741 kubelet[2742]: I0213 20:28:16.979023 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-xtables-lock\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.981741 kubelet[2742]: I0213 20:28:16.979038 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-lib-modules\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.981741 kubelet[2742]: I0213 20:28:16.979054 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cni-path\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.981741 kubelet[2742]: I0213 20:28:16.979072 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/c710d067-c616-4320-8001-fe3879354682-kube-api-access-jfzx7\") pod \"c710d067-c616-4320-8001-fe3879354682\" (UID: \"c710d067-c616-4320-8001-fe3879354682\") " Feb 13 20:28:16.981741 kubelet[2742]: I0213 20:28:16.979090 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hostproc\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.983175 kubelet[2742]: I0213 20:28:16.979107 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh99\" (UniqueName: \"kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-kube-api-access-mzh99\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.983175 kubelet[2742]: I0213 20:28:16.979122 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-bpf-maps\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.983175 kubelet[2742]: I0213 20:28:16.979136 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-cgroup\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.983175 kubelet[2742]: I0213 20:28:16.979152 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hubble-tls\") pod \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\" (UID: \"6f77539a-ba4f-4aed-a331-cdc49c9d4779\") " Feb 13 20:28:16.992883 kubelet[2742]: I0213 20:28:16.990812 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.992883 kubelet[2742]: I0213 20:28:16.990855 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 20:28:16.992883 kubelet[2742]: I0213 20:28:16.991870 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.992883 kubelet[2742]: I0213 20:28:16.991892 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.992883 kubelet[2742]: I0213 20:28:16.991993 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f77539a-ba4f-4aed-a331-cdc49c9d4779-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 20:28:16.993147 kubelet[2742]: I0213 20:28:16.992075 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.993147 kubelet[2742]: I0213 20:28:16.992117 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.994925 kubelet[2742]: I0213 20:28:16.994890 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 20:28:16.998887 kubelet[2742]: I0213 20:28:16.998300 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c710d067-c616-4320-8001-fe3879354682-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c710d067-c616-4320-8001-fe3879354682" (UID: "c710d067-c616-4320-8001-fe3879354682"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 20:28:16.998887 kubelet[2742]: I0213 20:28:16.998364 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.998887 kubelet[2742]: I0213 20:28:16.998404 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.999289 kubelet[2742]: I0213 20:28:16.999253 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c710d067-c616-4320-8001-fe3879354682-kube-api-access-jfzx7" (OuterVolumeSpecName: "kube-api-access-jfzx7") pod "c710d067-c616-4320-8001-fe3879354682" (UID: "c710d067-c616-4320-8001-fe3879354682"). InnerVolumeSpecName "kube-api-access-jfzx7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 20:28:16.999559 kubelet[2742]: I0213 20:28:16.999463 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.999703 kubelet[2742]: I0213 20:28:16.999642 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:16.999999 kubelet[2742]: I0213 20:28:16.999901 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 20:28:17.003164 kubelet[2742]: I0213 20:28:17.003104 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-kube-api-access-mzh99" (OuterVolumeSpecName: "kube-api-access-mzh99") pod "6f77539a-ba4f-4aed-a331-cdc49c9d4779" (UID: "6f77539a-ba4f-4aed-a331-cdc49c9d4779"). InnerVolumeSpecName "kube-api-access-mzh99". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 20:28:17.041719 systemd[1]: Removed slice kubepods-burstable-pod6f77539a_ba4f_4aed_a331_cdc49c9d4779.slice - libcontainer container kubepods-burstable-pod6f77539a_ba4f_4aed_a331_cdc49c9d4779.slice. Feb 13 20:28:17.041856 systemd[1]: kubepods-burstable-pod6f77539a_ba4f_4aed_a331_cdc49c9d4779.slice: Consumed 7.791s CPU time, 196.1M memory peak, 72M read from disk, 13.3M written to disk. Feb 13 20:28:17.043810 systemd[1]: Removed slice kubepods-besteffort-podc710d067_c616_4320_8001_fe3879354682.slice - libcontainer container kubepods-besteffort-podc710d067_c616_4320_8001_fe3879354682.slice. Feb 13 20:28:17.080046 kubelet[2742]: I0213 20:28:17.079986 2742 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-etc-cni-netd\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.080728 kubelet[2742]: I0213 20:28:17.080339 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-config-path\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.080728 kubelet[2742]: I0213 20:28:17.080397 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-run\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.080728 kubelet[2742]: I0213 20:28:17.080424 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c710d067-c616-4320-8001-fe3879354682-cilium-config-path\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.080728 kubelet[2742]: I0213 20:28:17.080446 2742 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-lib-modules\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.080728 kubelet[2742]: I0213 20:28:17.080469 2742 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-xtables-lock\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.080728 kubelet[2742]: I0213 20:28:17.080491 2742 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cni-path\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 
20:28:17.080728 kubelet[2742]: I0213 20:28:17.080512 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/c710d067-c616-4320-8001-fe3879354682-kube-api-access-jfzx7\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.080728 kubelet[2742]: I0213 20:28:17.080536 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mzh99\" (UniqueName: \"kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-kube-api-access-mzh99\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.081470 kubelet[2742]: I0213 20:28:17.080558 2742 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hostproc\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.081470 kubelet[2742]: I0213 20:28:17.080580 2742 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-bpf-maps\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.081470 kubelet[2742]: I0213 20:28:17.080601 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-cilium-cgroup\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.081470 kubelet[2742]: I0213 20:28:17.080622 2742 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f77539a-ba4f-4aed-a331-cdc49c9d4779-hubble-tls\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.081470 kubelet[2742]: I0213 20:28:17.080643 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-net\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.081470 kubelet[2742]: I0213 20:28:17.080664 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f77539a-ba4f-4aed-a331-cdc49c9d4779-host-proc-sys-kernel\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.081470 kubelet[2742]: I0213 20:28:17.080691 2742 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f77539a-ba4f-4aed-a331-cdc49c9d4779-clustermesh-secrets\") on node \"srv-llv2e.gb1.brightbox.com\" DevicePath \"\"" Feb 13 20:28:17.625949 kubelet[2742]: I0213 20:28:17.625806 2742 scope.go:117] "RemoveContainer" containerID="9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0" Feb 13 20:28:17.638697 containerd[1536]: time="2025-02-13T20:28:17.638315337Z" level=info msg="RemoveContainer for \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\"" Feb 13 20:28:17.641051 containerd[1536]: time="2025-02-13T20:28:17.641026593Z" level=info msg="RemoveContainer for \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\" returns successfully" Feb 13 20:28:17.642598 kubelet[2742]: I0213 20:28:17.642203 2742 scope.go:117] "RemoveContainer" containerID="9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0" Feb 13 20:28:17.642926 containerd[1536]: time="2025-02-13T20:28:17.642888567Z" level=error msg="ContainerStatus for \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\": not found" Feb 13 20:28:17.646174 kubelet[2742]: E0213 20:28:17.645469 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\": not found" containerID="9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0" Feb 13 20:28:17.651400 kubelet[2742]: I0213 20:28:17.646235 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0"} err="failed to get container status \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c54a7464ee7c303111b46f9220e13e6092d69dd1b4fed4f9c57b9d4e56cedd0\": not found" Feb 13 20:28:17.651400 kubelet[2742]: I0213 20:28:17.650501 2742 scope.go:117] "RemoveContainer" containerID="71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906" Feb 13 20:28:17.653574 containerd[1536]: time="2025-02-13T20:28:17.653552000Z" level=info msg="RemoveContainer for \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\"" Feb 13 20:28:17.656488 containerd[1536]: time="2025-02-13T20:28:17.656466389Z" level=info msg="RemoveContainer for \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\" returns successfully" Feb 13 20:28:17.657072 kubelet[2742]: I0213 20:28:17.656971 2742 scope.go:117] "RemoveContainer" containerID="5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53" Feb 13 20:28:17.659444 containerd[1536]: time="2025-02-13T20:28:17.659325085Z" level=info msg="RemoveContainer for \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\"" Feb 13 20:28:17.662081 containerd[1536]: time="2025-02-13T20:28:17.661973549Z" level=info msg="RemoveContainer for \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\" returns successfully" Feb 13 20:28:17.662529 kubelet[2742]: I0213 20:28:17.662311 2742 scope.go:117] "RemoveContainer" containerID="5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa" Feb 13 20:28:17.664930 containerd[1536]: time="2025-02-13T20:28:17.664885119Z" level=info msg="RemoveContainer for \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\"" Feb 13 20:28:17.667165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79ed401a671d5958c08cd139d72dd8f8648eb710891ccd838d75221721b4a6a7-rootfs.mount: Deactivated successfully. Feb 13 20:28:17.668227 systemd[1]: var-lib-kubelet-pods-c710d067\x2dc616\x2d4320\x2d8001\x2dfe3879354682-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfzx7.mount: Deactivated successfully. Feb 13 20:28:17.668319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407-rootfs.mount: Deactivated successfully. Feb 13 20:28:17.668400 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d3b6a7306ce4a4b89d977a12f1e648fa7f976e4ab00d72f54c2dd35f7bd8407-shm.mount: Deactivated successfully. Feb 13 20:28:17.668465 systemd[1]: var-lib-kubelet-pods-6f77539a\x2dba4f\x2d4aed\x2da331\x2dcdc49c9d4779-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzh99.mount: Deactivated successfully. 
Feb 13 20:28:17.668535 systemd[1]: var-lib-kubelet-pods-6f77539a\x2dba4f\x2d4aed\x2da331\x2dcdc49c9d4779-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 20:28:17.668636 systemd[1]: var-lib-kubelet-pods-6f77539a\x2dba4f\x2d4aed\x2da331\x2dcdc49c9d4779-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 20:28:17.676854 containerd[1536]: time="2025-02-13T20:28:17.676811658Z" level=info msg="RemoveContainer for \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\" returns successfully"
Feb 13 20:28:17.677091 kubelet[2742]: I0213 20:28:17.677067 2742 scope.go:117] "RemoveContainer" containerID="992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5"
Feb 13 20:28:17.679407 containerd[1536]: time="2025-02-13T20:28:17.679377979Z" level=info msg="RemoveContainer for \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\""
Feb 13 20:28:17.681296 containerd[1536]: time="2025-02-13T20:28:17.681261864Z" level=info msg="RemoveContainer for \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\" returns successfully"
Feb 13 20:28:17.681534 kubelet[2742]: I0213 20:28:17.681484 2742 scope.go:117] "RemoveContainer" containerID="62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2"
Feb 13 20:28:17.682725 containerd[1536]: time="2025-02-13T20:28:17.682704380Z" level=info msg="RemoveContainer for \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\""
Feb 13 20:28:17.685084 containerd[1536]: time="2025-02-13T20:28:17.685037174Z" level=info msg="RemoveContainer for \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\" returns successfully"
Feb 13 20:28:17.685398 kubelet[2742]: I0213 20:28:17.685355 2742 scope.go:117] "RemoveContainer" containerID="71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906"
Feb 13 20:28:17.685746 containerd[1536]: time="2025-02-13T20:28:17.685691552Z" level=error msg="ContainerStatus for \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\": not found"
Feb 13 20:28:17.686046 kubelet[2742]: E0213 20:28:17.685924 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\": not found" containerID="71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906"
Feb 13 20:28:17.686046 kubelet[2742]: I0213 20:28:17.685956 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906"} err="failed to get container status \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\": rpc error: code = NotFound desc = an error occurred when try to find container \"71354c7de9cccdc84fa1a4e449dc404e4ecd941826516d981ea1b726507cf906\": not found"
Feb 13 20:28:17.686046 kubelet[2742]: I0213 20:28:17.685978 2742 scope.go:117] "RemoveContainer" containerID="5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53"
Feb 13 20:28:17.686369 containerd[1536]: time="2025-02-13T20:28:17.686267872Z" level=error msg="ContainerStatus for \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\": not found"
Feb 13 20:28:17.686481 kubelet[2742]: E0213 20:28:17.686448 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\": not found" containerID="5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53"
Feb 13 20:28:17.686531 kubelet[2742]: I0213 20:28:17.686499 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53"} err="failed to get container status \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\": rpc error: code = NotFound desc = an error occurred when try to find container \"5dc59c33d675bebce020ae783516eb4b89f17b5654e3e724b27eb3429c059c53\": not found"
Feb 13 20:28:17.686581 kubelet[2742]: I0213 20:28:17.686536 2742 scope.go:117] "RemoveContainer" containerID="5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa"
Feb 13 20:28:17.686899 containerd[1536]: time="2025-02-13T20:28:17.686875544Z" level=error msg="ContainerStatus for \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\": not found"
Feb 13 20:28:17.687172 kubelet[2742]: E0213 20:28:17.687122 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\": not found" containerID="5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa"
Feb 13 20:28:17.687225 kubelet[2742]: I0213 20:28:17.687187 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa"} err="failed to get container status \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f425656f4795f8afcfffd1711977569c0aaeacf56e7067bff570683a1f322aa\": not found"
Feb 13 20:28:17.687256 kubelet[2742]: I0213 20:28:17.687219 2742 scope.go:117] "RemoveContainer" containerID="992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5"
Feb 13 20:28:17.687967 containerd[1536]: time="2025-02-13T20:28:17.687643108Z" level=error msg="ContainerStatus for \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\": not found"
Feb 13 20:28:17.688178 kubelet[2742]: E0213 20:28:17.688149 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\": not found" containerID="992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5"
Feb 13 20:28:17.688263 kubelet[2742]: I0213 20:28:17.688224 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5"} err="failed to get container status \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"992750fcfb2c82f766815b1aada5c5279bd4a7002ea61e11af7767e12018c7f5\": not found"
Feb 13 20:28:17.688306 kubelet[2742]: I0213 20:28:17.688274 2742 scope.go:117] "RemoveContainer" containerID="62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2"
Feb 13 20:28:17.688529 containerd[1536]: time="2025-02-13T20:28:17.688472856Z" level=error msg="ContainerStatus for \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\": not found"
Feb 13 20:28:17.688720 kubelet[2742]: E0213 20:28:17.688703 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\": not found" containerID="62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2"
Feb 13 20:28:17.688818 kubelet[2742]: I0213 20:28:17.688798 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2"} err="failed to get container status \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"62630eec0eae38777d272c2355b37a492fd28b6f5e5d518cfc27482e122d18d2\": not found"
Feb 13 20:28:18.169345 kubelet[2742]: E0213 20:28:18.169210 2742 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:28:18.726084 sshd[4360]: Connection closed by 139.178.89.65 port 39472
Feb 13 20:28:18.727858 sshd-session[4358]: pam_unix(sshd:session): session closed for user core
Feb 13 20:28:18.735732 systemd[1]: sshd@22-10.244.92.114:22-139.178.89.65:39472.service: Deactivated successfully.
Feb 13 20:28:18.739295 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:28:18.741102 systemd-logind[1511]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:28:18.743648 systemd-logind[1511]: Removed session 25.
Feb 13 20:28:18.893446 systemd[1]: Started sshd@23-10.244.92.114:22-139.178.89.65:34076.service - OpenSSH per-connection server daemon (139.178.89.65:34076).
Feb 13 20:28:19.018428 kubelet[2742]: I0213 20:28:19.018215 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f77539a-ba4f-4aed-a331-cdc49c9d4779" path="/var/lib/kubelet/pods/6f77539a-ba4f-4aed-a331-cdc49c9d4779/volumes"
Feb 13 20:28:19.019586 kubelet[2742]: I0213 20:28:19.019543 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c710d067-c616-4320-8001-fe3879354682" path="/var/lib/kubelet/pods/c710d067-c616-4320-8001-fe3879354682/volumes"
Feb 13 20:28:19.814351 sshd[4520]: Accepted publickey for core from 139.178.89.65 port 34076 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:28:19.818528 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:28:19.830159 systemd-logind[1511]: New session 26 of user core.
Feb 13 20:28:19.836957 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:28:21.243009 kubelet[2742]: I0213 20:28:21.242773 2742 memory_manager.go:355] "RemoveStaleState removing state" podUID="c710d067-c616-4320-8001-fe3879354682" containerName="cilium-operator"
Feb 13 20:28:21.243009 kubelet[2742]: I0213 20:28:21.242826 2742 memory_manager.go:355] "RemoveStaleState removing state" podUID="6f77539a-ba4f-4aed-a331-cdc49c9d4779" containerName="cilium-agent"
Feb 13 20:28:21.263736 systemd[1]: Created slice kubepods-burstable-pod0909e24e_be0b_4acb_b7f9_9dac409ba6f0.slice - libcontainer container kubepods-burstable-pod0909e24e_be0b_4acb_b7f9_9dac409ba6f0.slice.
Feb 13 20:28:21.312388 kubelet[2742]: I0213 20:28:21.312325 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-cilium-run\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312581 kubelet[2742]: I0213 20:28:21.312401 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-cilium-cgroup\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312581 kubelet[2742]: I0213 20:28:21.312446 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-xtables-lock\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312581 kubelet[2742]: I0213 20:28:21.312487 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-lib-modules\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312581 kubelet[2742]: I0213 20:28:21.312524 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-host-proc-sys-net\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312834 kubelet[2742]: I0213 20:28:21.312610 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-cni-path\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312834 kubelet[2742]: I0213 20:28:21.312656 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhxt\" (UniqueName: \"kubernetes.io/projected/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-kube-api-access-6lhxt\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312834 kubelet[2742]: I0213 20:28:21.312695 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-hostproc\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312834 kubelet[2742]: I0213 20:28:21.312791 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-etc-cni-netd\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312949 kubelet[2742]: I0213 20:28:21.312877 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-bpf-maps\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.312949 kubelet[2742]: I0213 20:28:21.312924 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-cilium-config-path\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.313004 kubelet[2742]: I0213 20:28:21.312964 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-cilium-ipsec-secrets\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.313031 kubelet[2742]: I0213 20:28:21.313000 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-hubble-tls\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.313063 kubelet[2742]: I0213 20:28:21.313040 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-clustermesh-secrets\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.313094 kubelet[2742]: I0213 20:28:21.313077 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0909e24e-be0b-4acb-b7f9-9dac409ba6f0-host-proc-sys-kernel\") pod \"cilium-9gb4g\" (UID: \"0909e24e-be0b-4acb-b7f9-9dac409ba6f0\") " pod="kube-system/cilium-9gb4g"
Feb 13 20:28:21.418669 sshd[4523]: Connection closed by 139.178.89.65 port 34076
Feb 13 20:28:21.418281 sshd-session[4520]: pam_unix(sshd:session): session closed for user core
Feb 13 20:28:21.428442 systemd[1]: sshd@23-10.244.92.114:22-139.178.89.65:34076.service: Deactivated successfully.
Feb 13 20:28:21.458488 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:28:21.466848 systemd-logind[1511]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:28:21.469328 systemd-logind[1511]: Removed session 26.
Feb 13 20:28:21.580696 containerd[1536]: time="2025-02-13T20:28:21.579848002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gb4g,Uid:0909e24e-be0b-4acb-b7f9-9dac409ba6f0,Namespace:kube-system,Attempt:0,}"
Feb 13 20:28:21.585993 systemd[1]: Started sshd@24-10.244.92.114:22-139.178.89.65:34092.service - OpenSSH per-connection server daemon (139.178.89.65:34092).
Feb 13 20:28:21.615697 containerd[1536]: time="2025-02-13T20:28:21.615615830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:28:21.616902 containerd[1536]: time="2025-02-13T20:28:21.616855989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:28:21.617041 containerd[1536]: time="2025-02-13T20:28:21.617021783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:28:21.617257 containerd[1536]: time="2025-02-13T20:28:21.617228602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:28:21.646059 systemd[1]: Started cri-containerd-d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9.scope - libcontainer container d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9.
Feb 13 20:28:21.683098 containerd[1536]: time="2025-02-13T20:28:21.683032578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gb4g,Uid:0909e24e-be0b-4acb-b7f9-9dac409ba6f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\""
Feb 13 20:28:21.699291 containerd[1536]: time="2025-02-13T20:28:21.699242184Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 20:28:21.708743 containerd[1536]: time="2025-02-13T20:28:21.708681422Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625\""
Feb 13 20:28:21.711353 containerd[1536]: time="2025-02-13T20:28:21.710968240Z" level=info msg="StartContainer for \"220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625\""
Feb 13 20:28:21.750251 systemd[1]: Started cri-containerd-220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625.scope - libcontainer container 220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625.
Feb 13 20:28:21.786043 containerd[1536]: time="2025-02-13T20:28:21.785942377Z" level=info msg="StartContainer for \"220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625\" returns successfully"
Feb 13 20:28:21.800248 systemd[1]: cri-containerd-220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625.scope: Deactivated successfully.
Feb 13 20:28:21.800848 systemd[1]: cri-containerd-220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625.scope: Consumed 25ms CPU time, 9.2M memory peak, 2.7M read from disk.
Feb 13 20:28:21.841716 containerd[1536]: time="2025-02-13T20:28:21.841134151Z" level=info msg="shim disconnected" id=220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625 namespace=k8s.io
Feb 13 20:28:21.841716 containerd[1536]: time="2025-02-13T20:28:21.841264526Z" level=warning msg="cleaning up after shim disconnected" id=220a3f2cc288adcdd9cfb3725d9f583d693e233a301b79446b16a76a3735d625 namespace=k8s.io
Feb 13 20:28:21.841716 containerd[1536]: time="2025-02-13T20:28:21.841292136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:28:22.498218 sshd[4538]: Accepted publickey for core from 139.178.89.65 port 34092 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:28:22.501998 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:28:22.512914 systemd-logind[1511]: New session 27 of user core.
Feb 13 20:28:22.518950 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:28:22.653640 containerd[1536]: time="2025-02-13T20:28:22.652911926Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 20:28:22.668122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724976613.mount: Deactivated successfully.
Feb 13 20:28:22.671344 containerd[1536]: time="2025-02-13T20:28:22.670450293Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18\""
Feb 13 20:28:22.671344 containerd[1536]: time="2025-02-13T20:28:22.671197135Z" level=info msg="StartContainer for \"9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18\""
Feb 13 20:28:22.731937 systemd[1]: Started cri-containerd-9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18.scope - libcontainer container 9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18.
Feb 13 20:28:22.762866 containerd[1536]: time="2025-02-13T20:28:22.762007475Z" level=info msg="StartContainer for \"9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18\" returns successfully"
Feb 13 20:28:22.772279 systemd[1]: cri-containerd-9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18.scope: Deactivated successfully.
Feb 13 20:28:22.773348 systemd[1]: cri-containerd-9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18.scope: Consumed 22ms CPU time, 7.5M memory peak, 2M read from disk.
Feb 13 20:28:22.804420 containerd[1536]: time="2025-02-13T20:28:22.804331561Z" level=info msg="shim disconnected" id=9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18 namespace=k8s.io
Feb 13 20:28:22.804899 containerd[1536]: time="2025-02-13T20:28:22.804671189Z" level=warning msg="cleaning up after shim disconnected" id=9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18 namespace=k8s.io
Feb 13 20:28:22.804899 containerd[1536]: time="2025-02-13T20:28:22.804688458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:28:23.120022 sshd[4641]: Connection closed by 139.178.89.65 port 34092
Feb 13 20:28:23.121317 sshd-session[4538]: pam_unix(sshd:session): session closed for user core
Feb 13 20:28:23.131141 systemd[1]: sshd@24-10.244.92.114:22-139.178.89.65:34092.service: Deactivated successfully.
Feb 13 20:28:23.135381 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:28:23.136382 systemd-logind[1511]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:28:23.137442 systemd-logind[1511]: Removed session 27.
Feb 13 20:28:23.171241 kubelet[2742]: E0213 20:28:23.171142 2742 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:28:23.286126 systemd[1]: Started sshd@25-10.244.92.114:22-139.178.89.65:34096.service - OpenSSH per-connection server daemon (139.178.89.65:34096).
Feb 13 20:28:23.440514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ee0c8fa4840a554b1590c03f858b64cda3702f4e051cd81342a827ce892ae18-rootfs.mount: Deactivated successfully.
Feb 13 20:28:23.656222 containerd[1536]: time="2025-02-13T20:28:23.656166202Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 20:28:23.672675 containerd[1536]: time="2025-02-13T20:28:23.672627669Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251\""
Feb 13 20:28:23.676380 containerd[1536]: time="2025-02-13T20:28:23.676103159Z" level=info msg="StartContainer for \"81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251\""
Feb 13 20:28:23.715997 systemd[1]: run-containerd-runc-k8s.io-81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251-runc.aqmtbu.mount: Deactivated successfully.
Feb 13 20:28:23.723927 systemd[1]: Started cri-containerd-81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251.scope - libcontainer container 81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251.
Feb 13 20:28:23.761870 containerd[1536]: time="2025-02-13T20:28:23.761626586Z" level=info msg="StartContainer for \"81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251\" returns successfully"
Feb 13 20:28:23.766910 systemd[1]: cri-containerd-81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251.scope: Deactivated successfully.
Feb 13 20:28:23.799171 containerd[1536]: time="2025-02-13T20:28:23.798874443Z" level=info msg="shim disconnected" id=81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251 namespace=k8s.io
Feb 13 20:28:23.799171 containerd[1536]: time="2025-02-13T20:28:23.798978806Z" level=warning msg="cleaning up after shim disconnected" id=81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251 namespace=k8s.io
Feb 13 20:28:23.799171 containerd[1536]: time="2025-02-13T20:28:23.798998299Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:28:24.191855 sshd[4709]: Accepted publickey for core from 139.178.89.65 port 34096 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:28:24.195817 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:28:24.207813 systemd-logind[1511]: New session 28 of user core.
Feb 13 20:28:24.216958 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:28:24.441063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81fd85f76f4ec6690abc243ea13f610bfa2a4511a3f6d06342c988cb3aa63251-rootfs.mount: Deactivated successfully.
Feb 13 20:28:24.695115 containerd[1536]: time="2025-02-13T20:28:24.695049752Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 20:28:24.707298 containerd[1536]: time="2025-02-13T20:28:24.707249286Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68\""
Feb 13 20:28:24.709774 containerd[1536]: time="2025-02-13T20:28:24.709664226Z" level=info msg="StartContainer for \"4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68\""
Feb 13 20:28:24.762171 systemd[1]: run-containerd-runc-k8s.io-4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68-runc.Did49J.mount: Deactivated successfully.
Feb 13 20:28:24.771268 systemd[1]: Started cri-containerd-4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68.scope - libcontainer container 4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68.
Feb 13 20:28:24.818333 systemd[1]: cri-containerd-4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68.scope: Deactivated successfully.
Feb 13 20:28:24.823287 containerd[1536]: time="2025-02-13T20:28:24.820033812Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0909e24e_be0b_4acb_b7f9_9dac409ba6f0.slice/cri-containerd-4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68.scope/memory.events\": no such file or directory"
Feb 13 20:28:24.823287 containerd[1536]: time="2025-02-13T20:28:24.823143007Z" level=info msg="StartContainer for \"4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68\" returns successfully"
Feb 13 20:28:24.862841 containerd[1536]: time="2025-02-13T20:28:24.862644915Z" level=info msg="shim disconnected" id=4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68 namespace=k8s.io
Feb 13 20:28:24.862841 containerd[1536]: time="2025-02-13T20:28:24.862817180Z" level=warning msg="cleaning up after shim disconnected" id=4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68 namespace=k8s.io
Feb 13 20:28:24.862841 containerd[1536]: time="2025-02-13T20:28:24.862842010Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:28:25.442103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c2a18792220dbc36cf04809e6b0ad86e745ef22c579198666b23b843bd1bb68-rootfs.mount: Deactivated successfully.
Feb 13 20:28:25.678715 containerd[1536]: time="2025-02-13T20:28:25.678653953Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 20:28:25.695604 containerd[1536]: time="2025-02-13T20:28:25.695489090Z" level=info msg="CreateContainer within sandbox \"d6c67984e33848518e664784a417868846804886ef66e42028f39f38736903c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0\""
Feb 13 20:28:25.697765 containerd[1536]: time="2025-02-13T20:28:25.696685203Z" level=info msg="StartContainer for \"4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0\""
Feb 13 20:28:25.763965 systemd[1]: Started cri-containerd-4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0.scope - libcontainer container 4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0.
Feb 13 20:28:25.834705 containerd[1536]: time="2025-02-13T20:28:25.834650388Z" level=info msg="StartContainer for \"4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0\" returns successfully"
Feb 13 20:28:26.308949 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 20:28:26.440266 systemd[1]: run-containerd-runc-k8s.io-4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0-runc.aY9FZL.mount: Deactivated successfully.
Feb 13 20:28:26.545361 kubelet[2742]: I0213 20:28:26.544967 2742 setters.go:602] "Node became not ready" node="srv-llv2e.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T20:28:26Z","lastTransitionTime":"2025-02-13T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 20:28:27.114568 systemd[1]: run-containerd-runc-k8s.io-4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0-runc.zeS3J7.mount: Deactivated successfully.
Feb 13 20:28:29.486288 systemd-networkd[1441]: lxc_health: Link UP
Feb 13 20:28:29.494415 systemd-networkd[1441]: lxc_health: Gained carrier
Feb 13 20:28:29.605389 kubelet[2742]: I0213 20:28:29.605303 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9gb4g" podStartSLOduration=8.605281009 podStartE2EDuration="8.605281009s" podCreationTimestamp="2025-02-13 20:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:26.710377405 +0000 UTC m=+153.912409535" watchObservedRunningTime="2025-02-13 20:28:29.605281009 +0000 UTC m=+156.807313145"
Feb 13 20:28:31.031983 systemd-networkd[1441]: lxc_health: Gained IPv6LL
Feb 13 20:28:31.659669 kubelet[2742]: E0213 20:28:31.659621 2742 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37090->127.0.0.1:43039: write tcp 127.0.0.1:37090->127.0.0.1:43039: write: broken pipe
Feb 13 20:28:33.776496 systemd[1]: run-containerd-runc-k8s.io-4127929940b9bfa3cd8dc81b614d096308e6a7c69fdef8f2b93f3c85cd0f02b0-runc.oLFBSq.mount: Deactivated successfully.
Feb 13 20:28:35.431023 systemd[1]: Started sshd@26-10.244.92.114:22-194.0.234.37:59952.service - OpenSSH per-connection server daemon (194.0.234.37:59952).
Feb 13 20:28:36.192209 sshd[4769]: Connection closed by 139.178.89.65 port 34096
Feb 13 20:28:36.195286 sshd-session[4709]: pam_unix(sshd:session): session closed for user core
Feb 13 20:28:36.204318 systemd-logind[1511]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:28:36.205585 systemd[1]: sshd@25-10.244.92.114:22-139.178.89.65:34096.service: Deactivated successfully.
Feb 13 20:28:36.209496 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:28:36.211726 systemd-logind[1511]: Removed session 28.
Feb 13 20:28:37.235259 sshd[5508]: Invalid user nutanix from 194.0.234.37 port 59952
Feb 13 20:28:37.954789 sshd[5508]: Connection closed by invalid user nutanix 194.0.234.37 port 59952 [preauth]
Feb 13 20:28:37.957932 systemd[1]: sshd@26-10.244.92.114:22-194.0.234.37:59952.service: Deactivated successfully.