Feb 13 22:50:08.029986 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 22:50:08.030021 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 22:50:08.030035 kernel: BIOS-provided physical RAM map:
Feb 13 22:50:08.030050 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 22:50:08.030059 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 22:50:08.030069 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 22:50:08.030080 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 22:50:08.030090 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 22:50:08.030099 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 22:50:08.030109 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 22:50:08.030119 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 22:50:08.030129 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 22:50:08.030151 kernel: NX (Execute Disable) protection: active
Feb 13 22:50:08.030172 kernel: APIC: Static calls initialized
Feb 13 22:50:08.030186 kernel: SMBIOS 2.8 present.
Feb 13 22:50:08.030202 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 22:50:08.030214 kernel: Hypervisor detected: KVM
Feb 13 22:50:08.030230 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 22:50:08.030241 kernel: kvm-clock: using sched offset of 5235814836 cycles
Feb 13 22:50:08.030252 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 22:50:08.030263 kernel: tsc: Detected 2799.998 MHz processor
Feb 13 22:50:08.030274 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 22:50:08.030285 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 22:50:08.030296 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 22:50:08.030307 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 22:50:08.030317 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 22:50:08.030332 kernel: Using GB pages for direct mapping
Feb 13 22:50:08.030343 kernel: ACPI: Early table checksum verification disabled
Feb 13 22:50:08.030354 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 22:50:08.030365 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:50:08.030376 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:50:08.030386 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:50:08.030397 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 22:50:08.030408 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:50:08.030418 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
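
The BIOS-e820 map above is the firmware's authoritative view of guest memory; summing its two "usable" ranges explains the roughly 2 GiB total the kernel reports later ("Memory: 1901536K/2096616K available ..."). A minimal sketch, with the e820 lines pasted in as data (nothing here is a kernel API):

```python
import re

E820_LINES = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
"""

total = 0
for m in re.finditer(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", E820_LINES):
    start, end = int(m.group(1), 16), int(m.group(2), 16)
    total += end - start + 1  # e820 end addresses are inclusive

print(f"usable RAM: {total} bytes = {total / 2**20:.1f} MiB")
# ~2047.5 MiB, i.e. a 2 GiB guest; the kernel's later 2096616K total
# is this figure minus a few early-reserved pages
```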
Feb 13 22:50:08.030434 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:50:08.030445 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:50:08.030455 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 22:50:08.030466 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 22:50:08.030477 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 22:50:08.030494 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 22:50:08.030505 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 22:50:08.030521 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 22:50:08.030532 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 22:50:08.030544 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 22:50:08.030560 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 22:50:08.030572 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 22:50:08.030583 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 22:50:08.030594 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 22:50:08.030610 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 22:50:08.030622 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 22:50:08.030633 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 22:50:08.030644 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 22:50:08.030655 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 22:50:08.030667 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 22:50:08.030678 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 22:50:08.030689 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 22:50:08.030700 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 22:50:08.030715 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 22:50:08.030732 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 22:50:08.030744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 22:50:08.030755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 22:50:08.030766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 22:50:08.030778 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 22:50:08.030789 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 22:50:08.030801 kernel: Zone ranges:
Feb 13 22:50:08.030812 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 22:50:08.030823 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 22:50:08.030839 kernel: Normal empty
Feb 13 22:50:08.030851 kernel: Movable zone start for each node
Feb 13 22:50:08.030862 kernel: Early memory node ranges
Feb 13 22:50:08.031923 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 22:50:08.031938 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 22:50:08.031949 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 22:50:08.031961 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 22:50:08.031972 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 22:50:08.031990 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 22:50:08.032002 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 22:50:08.032021 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 22:50:08.032033 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
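
The "Early memory node ranges" entries above pin down exactly how many 4 KiB page frames node 0 owns. A quick back-of-the-envelope check with the ranges copied from the log (end addresses assumed inclusive, as the kernel prints them):

```python
PAGE = 4096
ranges = [(0x0000000000001000, 0x000000000009efff),
          (0x0000000000100000, 0x000000007ffdbfff)]  # from "Early memory node ranges"

pages = sum((end + 1 - start) // PAGE for start, end in ranges)
print(pages, "pages ~", pages * PAGE // 2**20, "MiB")
# 524154 pages (~2047 MiB); the later "Total pages: 515804" figure is lower
# because it excludes pages the kernel reserves early (memmap, initrd, etc.)
```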
Feb 13 22:50:08.032044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 22:50:08.032055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 22:50:08.032067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 22:50:08.032078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 22:50:08.032089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 22:50:08.032100 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 22:50:08.032112 kernel: TSC deadline timer available
Feb 13 22:50:08.032128 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 22:50:08.032139 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 22:50:08.032150 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 22:50:08.032162 kernel: Booting paravirtualized kernel on KVM
Feb 13 22:50:08.032184 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 22:50:08.032196 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 22:50:08.032207 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 22:50:08.032218 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 22:50:08.032235 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 22:50:08.032246 kernel: kvm-guest: PV spinlocks enabled
Feb 13 22:50:08.032258 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 22:50:08.032271 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 22:50:08.032283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 22:50:08.032294 kernel: random: crng init done
Feb 13 22:50:08.032305 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 22:50:08.032316 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 22:50:08.032332 kernel: Fallback order for Node 0: 0
Feb 13 22:50:08.032344 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 22:50:08.032360 kernel: Policy zone: DMA32
Feb 13 22:50:08.032372 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 22:50:08.032384 kernel: software IO TLB: area num 16.
Feb 13 22:50:08.032395 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 194820K reserved, 0K cma-reserved)
Feb 13 22:50:08.032407 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 22:50:08.032418 kernel: Kernel/User page tables isolation: enabled
Feb 13 22:50:08.032430 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 22:50:08.032446 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 22:50:08.032458 kernel: Dynamic Preempt: voluntary
Feb 13 22:50:08.032469 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 22:50:08.032481 kernel: rcu: RCU event tracing is enabled.
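
The "Kernel command line" entry above mixes bare flags with key=value options, and splitting it the way the kernel does (whitespace, then a single "=") makes the duplicated rootflags/mount.usrflags easy to spot. An illustrative parser over an abbreviated copy of the logged string:

```python
cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "mount.usr=/dev/mapper/usr root=LABEL=ROOT console=ttyS0,115200n8 "
           "console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack "
           "flatcar.autologin")  # abbreviated from the log

flags, options = [], []
for tok in cmdline.split():
    if "=" in tok:
        options.append(tuple(tok.split("=", 1)))
    else:
        flags.append(tok)

print(flags)    # ['flatcar.autologin']
print(options)  # 'console' appears twice; the last one (tty0) becomes /dev/console
```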
Feb 13 22:50:08.032493 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 22:50:08.032518 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 22:50:08.032545 kernel: Rude variant of Tasks RCU enabled.
Feb 13 22:50:08.032562 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 22:50:08.032574 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 22:50:08.033896 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 22:50:08.033911 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 22:50:08.033924 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 22:50:08.033942 kernel: Console: colour VGA+ 80x25
Feb 13 22:50:08.033954 kernel: printk: console [tty0] enabled
Feb 13 22:50:08.033967 kernel: printk: console [ttyS0] enabled
Feb 13 22:50:08.033979 kernel: ACPI: Core revision 20230628
Feb 13 22:50:08.033991 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 22:50:08.034007 kernel: x2apic enabled
Feb 13 22:50:08.034019 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 22:50:08.034038 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Feb 13 22:50:08.034051 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 13 22:50:08.034063 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 22:50:08.034075 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 22:50:08.034087 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 22:50:08.034098 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 22:50:08.034110 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 22:50:08.034122 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 22:50:08.034139 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 22:50:08.034151 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 22:50:08.034171 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 22:50:08.034185 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 22:50:08.034197 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 22:50:08.034208 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 13 22:50:08.034220 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 13 22:50:08.034231 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 22:50:08.034243 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 22:50:08.034255 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 22:50:08.034267 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 22:50:08.034284 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 22:50:08.034296 kernel: Freeing SMP alternatives memory: 32K
Feb 13 22:50:08.034313 kernel: pid_max: default: 32768 minimum: 301
Feb 13 22:50:08.034326 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 22:50:08.034338 kernel: landlock: Up and running.
Feb 13 22:50:08.034350 kernel: SELinux: Initializing.
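
The mitigation lines above (Spectre V1/V2, Speculative Store Bypass, MDS, MMIO Stale Data, SRBDS) are also exported at runtime under /sys/devices/system/cpu/vulnerabilities, which is usually the easier place to audit a running guest than scrolling back through dmesg. A minimal reader for that sysfs directory:

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for f in sorted(VULN_DIR.iterdir()):
    # each file holds one line, e.g. "Mitigation: Retpolines"
    print(f"{f.name:24} {f.read_text().strip()}")
```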
Feb 13 22:50:08.034362 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 22:50:08.034373 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 22:50:08.034385 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 13 22:50:08.034397 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 22:50:08.034409 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 22:50:08.034427 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 22:50:08.034440 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 13 22:50:08.034452 kernel: signal: max sigframe size: 1776
Feb 13 22:50:08.034464 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 22:50:08.034476 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 22:50:08.034488 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 22:50:08.034500 kernel: smp: Bringing up secondary CPUs ...
Feb 13 22:50:08.034512 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 22:50:08.034524 kernel: .... node #0, CPUs: #1
Feb 13 22:50:08.034541 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 22:50:08.034553 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 22:50:08.034564 kernel: smpboot: Max logical packages: 16
Feb 13 22:50:08.034576 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Feb 13 22:50:08.034588 kernel: devtmpfs: initialized
Feb 13 22:50:08.034600 kernel: x86/mm: Memory block size: 128MB
Feb 13 22:50:08.034612 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 22:50:08.034624 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 22:50:08.034636 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 22:50:08.034653 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 22:50:08.034665 kernel: audit: initializing netlink subsys (disabled)
Feb 13 22:50:08.034677 kernel: audit: type=2000 audit(1739487006.665:1): state=initialized audit_enabled=0 res=1
Feb 13 22:50:08.034688 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 22:50:08.034700 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 22:50:08.034712 kernel: cpuidle: using governor menu
Feb 13 22:50:08.034724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 22:50:08.034736 kernel: dca service started, version 1.12.1
Feb 13 22:50:08.034748 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 22:50:08.034765 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 22:50:08.034777 kernel: PCI: Using configuration type 1 for base access
Feb 13 22:50:08.034789 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
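
The audit record above carries its own timestamp in the form audit(1739487006.665:1), i.e. seconds.millis:serial. Converting it confirms it lines up with the journal timestamps around it:

```python
from datetime import datetime, timezone

stamp = "1739487006.665:1"   # from "audit: type=2000 audit(...)"
epoch, serial = stamp.split(":")
t = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(t.isoformat(), "serial", serial)
# 2025-02-13T22:50:06.665000+00:00 -- about a second before these kernel lines
```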
Feb 13 22:50:08.034801 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 22:50:08.034812 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 22:50:08.034824 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 22:50:08.034836 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 22:50:08.034848 kernel: ACPI: Added _OSI(Module Device)
Feb 13 22:50:08.034860 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 22:50:08.035928 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 22:50:08.035942 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 22:50:08.035954 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 22:50:08.035966 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 22:50:08.035978 kernel: ACPI: Interpreter enabled
Feb 13 22:50:08.035990 kernel: ACPI: PM: (supports S0 S5)
Feb 13 22:50:08.036002 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 22:50:08.036014 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 22:50:08.036026 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 22:50:08.036045 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 22:50:08.036057 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 22:50:08.036315 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 22:50:08.036492 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 22:50:08.036657 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 22:50:08.036675 kernel: PCI host bridge to bus 0000:00
Feb 13 22:50:08.036839 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 22:50:08.038034 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 22:50:08.038200 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 22:50:08.038350 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 22:50:08.038498 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 22:50:08.038650 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 22:50:08.038798 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 22:50:08.040055 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 22:50:08.040278 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 22:50:08.040443 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 22:50:08.040604 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 22:50:08.040764 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 22:50:08.041978 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 22:50:08.042196 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.042371 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 22:50:08.042551 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.042714 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 22:50:08.044919 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.045093 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 22:50:08.045298 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.045472 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 22:50:08.045653 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.045817 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 22:50:08.046048 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.046228 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 22:50:08.046415 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.046588 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 22:50:08.046772 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 22:50:08.047979 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 22:50:08.048178 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 22:50:08.048348 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 22:50:08.048514 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 22:50:08.048679 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 22:50:08.048853 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 22:50:08.049075 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 22:50:08.049252 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 22:50:08.049415 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 22:50:08.049600 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 22:50:08.049790 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 22:50:08.052996 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 22:50:08.053202 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 22:50:08.053368 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 22:50:08.053528 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 22:50:08.053698 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 22:50:08.053862 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 22:50:08.054081 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 22:50:08.054279 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 22:50:08.054454 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 22:50:08.054619 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 22:50:08.054784 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 22:50:08.057010 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 22:50:08.057344 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 22:50:08.057594 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 22:50:08.058430 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 22:50:08.058617 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 22:50:08.058809 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 22:50:08.060035 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 22:50:08.060251 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 22:50:08.060418 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 22:50:08.060591 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 22:50:08.060798 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 22:50:08.062021 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 22:50:08.062207 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 22:50:08.062372 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 22:50:08.062533 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 22:50:08.062700 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 22:50:08.062861 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 22:50:08.064088 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 22:50:08.064277 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 22:50:08.064442 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 22:50:08.064606 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 22:50:08.064772 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 22:50:08.064969 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 22:50:08.065132 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 22:50:08.065312 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 22:50:08.065483 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 22:50:08.065643 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 22:50:08.065811 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 22:50:08.066150 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 22:50:08.066334 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 22:50:08.066354 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 22:50:08.066367 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 22:50:08.066380 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 22:50:08.066400 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 22:50:08.066412 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 22:50:08.066424 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 22:50:08.066436 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 22:50:08.066449 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 22:50:08.066461 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 22:50:08.066473 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 22:50:08.066485 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 22:50:08.066497 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 22:50:08.066514 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 22:50:08.066526 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 22:50:08.066538 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 22:50:08.066550 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 22:50:08.066562 kernel: iommu: Default domain type: Translated
Feb 13 22:50:08.066575 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 22:50:08.066586 kernel: PCI: Using ACPI for IRQ routing
Feb 13 22:50:08.066598 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 22:50:08.066610 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 22:50:08.066627 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 22:50:08.066791 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
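
Probe lines like "pci 0000:00:03.0: [1af4:1000]" encode domain:bus:device.function plus the PCI vendor:device pair; 0x1af4 is Red Hat's virtio vendor ID, so 00:03.0 and 00:04.0 above are the virtio network and block devices, and 8086:2922 at 00:1f.2 is the Intel AHCI SATA controller probed later. A tiny decoder for lines in this format (the lines are pasted in as data):

```python
import re

lines = [
    "pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000",  # virtio-net
    "pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000",  # virtio-blk
    "pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601",  # Intel AHCI SATA
]

pat = re.compile(r"pci (\w{4}):(\w{2}):(\w{2})\.(\d): "
                 r"\[(\w{4}):(\w{4})\].*class 0x(\w{6})")
for line in lines:
    dom, bus, dev, fn, ven, did, cls = pat.match(line).groups()
    print(f"{bus}:{dev}.{fn} vendor=0x{ven} device=0x{did} class=0x{cls}")
```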
Feb 13 22:50:08.067010 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 22:50:08.067188 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 22:50:08.067208 kernel: vgaarb: loaded
Feb 13 22:50:08.067221 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 22:50:08.067233 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 22:50:08.067245 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 22:50:08.067264 kernel: pnp: PnP ACPI init
Feb 13 22:50:08.067448 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 22:50:08.067468 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 22:50:08.067481 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 22:50:08.067493 kernel: NET: Registered PF_INET protocol family
Feb 13 22:50:08.067506 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 22:50:08.067518 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 22:50:08.067530 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 22:50:08.067549 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 22:50:08.067561 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 22:50:08.067574 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 22:50:08.067586 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 22:50:08.067598 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 22:50:08.067610 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 22:50:08.067622 kernel: NET: Registered PF_XDP protocol family
Feb 13 22:50:08.067793 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 22:50:08.067995 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 22:50:08.068181 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 22:50:08.068348 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 22:50:08.068511 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 22:50:08.068675 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 22:50:08.068837 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 22:50:08.069032 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 22:50:08.069222 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 22:50:08.069389 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 22:50:08.069555 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 22:50:08.069720 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 22:50:08.069965 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 22:50:08.070133 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 22:50:08.070309 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 22:50:08.070479 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 22:50:08.070675 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 22:50:08.070851 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 22:50:08.071042 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 22:50:08.071220 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 22:50:08.071386 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 22:50:08.071561 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 22:50:08.071731 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 22:50:08.071943 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 22:50:08.072115 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 22:50:08.072295 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 22:50:08.072459 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 22:50:08.072621 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 22:50:08.072785 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 22:50:08.072974 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 22:50:08.073138 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 22:50:08.073315 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 22:50:08.073480 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 22:50:08.073644 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 22:50:08.073812 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 22:50:08.074005 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 22:50:08.074184 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 22:50:08.074352 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 22:50:08.074527 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 22:50:08.074696 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 22:50:08.074861 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 22:50:08.075079 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 22:50:08.075258 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 22:50:08.075429 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 22:50:08.075592 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 22:50:08.075753 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 22:50:08.075939 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 22:50:08.076106 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 22:50:08.076284 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 22:50:08.076448 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 22:50:08.076603 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 22:50:08.076752 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 22:50:08.076964 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 22:50:08.077110 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 22:50:08.077280 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 22:50:08.077425 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 22:50:08.077590 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 22:50:08.077745 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 22:50:08.077926 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 22:50:08.078101 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 22:50:08.078294 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 22:50:08.078451 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 22:50:08.078606 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 22:50:08.078782 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 22:50:08.078983 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 22:50:08.079139 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 22:50:08.079323 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 22:50:08.079477 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 22:50:08.079630 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 22:50:08.079803 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 22:50:08.079988 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 22:50:08.080178 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 22:50:08.080378 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 22:50:08.080555 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 22:50:08.080714 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 22:50:08.080920 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 22:50:08.081078 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 22:50:08.081248 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 22:50:08.081412 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 22:50:08.081573 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 22:50:08.081726 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 22:50:08.081746 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 22:50:08.081760 kernel: PCI: CLS 0 bytes, default 64
Feb 13 22:50:08.081773 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 22:50:08.081786 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Feb 13 22:50:08.081799 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 22:50:08.081812 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Feb 13 22:50:08.081825 kernel: Initialise system trusted keyrings
Feb 13 22:50:08.081844 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 22:50:08.081857 kernel: Key type asymmetric registered
Feb 13 22:50:08.081891 kernel: Asymmetric key parser 'x509' registered
Feb 13 22:50:08.081907 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 22:50:08.081920 kernel: io scheduler mq-deadline registered
Feb 13 22:50:08.081933 kernel: io scheduler kyber registered
Feb 13 22:50:08.081945 kernel: io scheduler bfq registered
Feb 13 22:50:08.082111 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Feb 13 22:50:08.082294 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Feb 13 22:50:08.082467 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.082633 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Feb 13 22:50:08.082797 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
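
The bounce-buffer line above gives the SWIOTLB window as an address range, and the "(64MB)" annotation is simply end minus start. Checking the arithmetic with the logged values:

```python
# "software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)"
start, end = 0x0000000079800000, 0x000000007d800000
size = end - start
print(size, "bytes =", size // 2**20, "MiB")  # 67108864 bytes = 64 MiB
```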
Feb 13 22:50:08.083007 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.083185 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Feb 13 22:50:08.083350 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Feb 13 22:50:08.083524 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.083693 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Feb 13 22:50:08.083855 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Feb 13 22:50:08.084042 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.084231 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Feb 13 22:50:08.084395 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Feb 13 22:50:08.084566 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.084729 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Feb 13 22:50:08.084935 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Feb 13 22:50:08.085100 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.085287 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Feb 13 22:50:08.085452 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Feb 13 22:50:08.085621 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.085784 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Feb 13 22:50:08.085974 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Feb 13 22:50:08.086137 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 22:50:08.086158 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 22:50:08.086184 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 22:50:08.086205 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 22:50:08.086218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 22:50:08.086231 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 22:50:08.086244 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 22:50:08.086257 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 22:50:08.086270 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 22:50:08.086283 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 22:50:08.086450 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 22:50:08.086615 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 22:50:08.086769 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T22:50:07 UTC (1739487007)
Feb 13 22:50:08.086966 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 22:50:08.086986 kernel: intel_pstate: CPU model not supported
Feb 13 22:50:08.086999 kernel: NET: Registered PF_INET6 protocol family
Feb 13 22:50:08.087012 kernel: Segment Routing with IPv6
Feb 13 22:50:08.087024 kernel: In-situ OAM (IOAM) with IPv6
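
rtc_cmos prints the moment it sets the system clock both as an ISO timestamp and as a Unix epoch; the two forms should round-trip exactly, which is an easy sanity check when correlating the serial console with journal timestamps:

```python
from datetime import datetime, timezone

# from "setting system clock to 2025-02-13T22:50:07 UTC (1739487007)"
epoch = 1739487007
assert datetime.fromtimestamp(epoch, tz=timezone.utc) == \
       datetime(2025, 2, 13, 22, 50, 7, tzinfo=timezone.utc)
print("RTC epoch and ISO timestamp agree")
```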
Feb 13 22:50:08.087037 kernel: NET: Registered PF_PACKET protocol family
Feb 13 22:50:08.087050 kernel: Key type dns_resolver registered
Feb 13 22:50:08.087069 kernel: IPI shorthand broadcast: enabled
Feb 13 22:50:08.087083 kernel: sched_clock: Marking stable (1397003608, 225951304)->(1757810189, -134855277)
Feb 13 22:50:08.087095 kernel: registered taskstats version 1
Feb 13 22:50:08.087108 kernel: Loading compiled-in X.509 certificates
Feb 13 22:50:08.087121 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 22:50:08.087134 kernel: Key type .fscrypt registered
Feb 13 22:50:08.087146 kernel: Key type fscrypt-provisioning registered
Feb 13 22:50:08.087159 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 22:50:08.087190 kernel: ima: Allocated hash algorithm: sha1
Feb 13 22:50:08.087203 kernel: ima: No architecture policies found
Feb 13 22:50:08.087216 kernel: clk: Disabling unused clocks
Feb 13 22:50:08.087229 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 22:50:08.087241 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 22:50:08.087254 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 22:50:08.087267 kernel: Run /init as init process
Feb 13 22:50:08.087280 kernel: with arguments:
Feb 13 22:50:08.087292 kernel: /init
Feb 13 22:50:08.087304 kernel: with environment:
Feb 13 22:50:08.087322 kernel: HOME=/
Feb 13 22:50:08.087334 kernel: TERM=linux
Feb 13 22:50:08.087347 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 22:50:08.087363 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 22:50:08.087378 systemd[1]: Detected virtualization kvm.
Feb 13 22:50:08.087392 systemd[1]: Detected architecture x86-64.
Feb 13 22:50:08.087405 systemd[1]: Running in initrd.
Feb 13 22:50:08.087422 systemd[1]: No hostname configured, using default hostname.
Feb 13 22:50:08.087436 systemd[1]: Hostname set to <localhost>.
Feb 13 22:50:08.087449 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 22:50:08.087462 systemd[1]: Queued start job for default target initrd.target.
Feb 13 22:50:08.087476 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 22:50:08.087489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 22:50:08.087504 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 22:50:08.087517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 22:50:08.087536 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 22:50:08.087550 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 22:50:08.087565 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 22:50:08.087579 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 22:50:08.087593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
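
systemd's banner line above compresses its compile-time features into +/- prefixed tokens (+PAM, -APPARMOR, ...). Splitting them makes it easy to see, for example, that this build has SELinux but not AppArmor. An illustrative split over a shortened copy of the string:

```python
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL -FIDO2 +TPM2 +ZSTD")  # abbreviated from the log

enabled  = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
print("SELINUX" in enabled, "APPARMOR" in disabled)  # True True
```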
Feb 13 22:50:08.087606 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 22:50:08.087619 systemd[1]: Reached target paths.target - Path Units.
Feb 13 22:50:08.087638 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 22:50:08.087652 systemd[1]: Reached target swap.target - Swaps.
Feb 13 22:50:08.087670 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 22:50:08.087684 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 22:50:08.087697 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 22:50:08.087711 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 22:50:08.087725 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 22:50:08.087738 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 22:50:08.087752 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 22:50:08.087770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 22:50:08.087784 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 22:50:08.087797 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 22:50:08.087810 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 22:50:08.087824 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 22:50:08.087837 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 22:50:08.087851 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 22:50:08.087864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 22:50:08.087910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 22:50:08.087925 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 22:50:08.087939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 22:50:08.087952 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 22:50:08.088008 systemd-journald[202]: Collecting audit messages is disabled.
Feb 13 22:50:08.088044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 22:50:08.088059 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 22:50:08.088072 kernel: Bridge firewalling registered
Feb 13 22:50:08.088090 systemd-journald[202]: Journal started
Feb 13 22:50:08.088116 systemd-journald[202]: Runtime Journal (/run/log/journal/be9603bc1f4948c48090eca85bc2996a) is 4.7M, max 38.0M, 33.2M free.
Feb 13 22:50:08.028270 systemd-modules-load[203]: Inserted module 'overlay'
Feb 13 22:50:08.063387 systemd-modules-load[203]: Inserted module 'br_netfilter'
Feb 13 22:50:08.120888 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 22:50:08.122082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 22:50:08.123066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 22:50:08.131070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 22:50:08.137055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 22:50:08.146089 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 22:50:08.148925 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 22:50:08.160027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 22:50:08.162673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 22:50:08.166178 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 22:50:08.176765 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 22:50:08.178949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 22:50:08.181290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 22:50:08.191129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 22:50:08.192944 dracut-cmdline[234]: dracut-dracut-053
Feb 13 22:50:08.196958 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 22:50:08.236265 systemd-resolved[240]: Positive Trust Anchors:
Feb 13 22:50:08.237251 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 22:50:08.237295 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 22:50:08.244446 systemd-resolved[240]: Defaulting to hostname 'linux'.
Feb 13 22:50:08.246542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 22:50:08.247332 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 22:50:08.294923 kernel: SCSI subsystem initialized
Feb 13 22:50:08.306909 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 22:50:08.318916 kernel: iscsi: registered transport (tcp)
Feb 13 22:50:08.344139 kernel: iscsi: registered transport (qla4xxx)
Feb 13 22:50:08.344228 kernel: QLogic iSCSI HBA Driver
Feb 13 22:50:08.398412 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 22:50:08.409088 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 22:50:08.438516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
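
The "Positive Trust Anchors" entry above is a DNSSEC DS record for the root zone: ". IN DS 20326 8 2 <digest>" means key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). Key tag 20326 is the well-known root KSK-2017, so systemd-resolved is shipping the current root trust anchor. Pulling the fields apart:

```python
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _type, keytag, alg, digest_type, digest = ds.split()
print(f"zone={owner} keytag={keytag} alg={alg} digest_type={digest_type}")
print(f"digest is {len(digest) * 4} bits")  # 256-bit SHA-256 digest
```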
Feb 13 22:50:08.438598 kernel: device-mapper: uevent: version 1.0.3
Feb 13 22:50:08.438619 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 22:50:08.488955 kernel: raid6: sse2x4 gen() 14721 MB/s
Feb 13 22:50:08.506909 kernel: raid6: sse2x2 gen() 10026 MB/s
Feb 13 22:50:08.525412 kernel: raid6: sse2x1 gen() 10785 MB/s
Feb 13 22:50:08.525493 kernel: raid6: using algorithm sse2x4 gen() 14721 MB/s
Feb 13 22:50:08.545608 kernel: raid6: .... xor() 8367 MB/s, rmw enabled
Feb 13 22:50:08.545729 kernel: raid6: using ssse3x2 recovery algorithm
Feb 13 22:50:08.570946 kernel: xor: automatically using best checksumming function avx
Feb 13 22:50:08.764928 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 22:50:08.779999 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 22:50:08.787090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 22:50:08.812224 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Feb 13 22:50:08.819055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 22:50:08.827075 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 22:50:08.848463 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Feb 13 22:50:08.887192 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 22:50:08.894086 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 22:50:09.001448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 22:50:09.009615 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 22:50:09.040035 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 22:50:09.042291 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 22:50:09.044491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 22:50:09.046793 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 22:50:09.055075 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 22:50:09.088410 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 22:50:09.118908 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Feb 13 22:50:09.165362 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 22:50:09.165573 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 22:50:09.165594 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 22:50:09.165622 kernel: GPT:17805311 != 125829119
Feb 13 22:50:09.165640 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 22:50:09.165657 kernel: GPT:17805311 != 125829119
Feb 13 22:50:09.165672 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 22:50:09.165688 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 22:50:09.169589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 22:50:09.172592 kernel: libata version 3.00 loaded.
Feb 13 22:50:09.169763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 22:50:09.171882 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 22:50:09.173803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
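
The GPT warnings above are the classic signature of a disk image written for a smaller disk and then grown: the backup GPT header claims the last LBA is 17805311, while the virtio disk actually has 125829120 512-byte sectors (last LBA 125829119). Working through the logged numbers:

```python
SECTOR = 512
image_last_lba = 17805311    # where the backup GPT header thinks the disk ends
disk_sectors   = 125829120   # "virtio1: [vda] 125829120 512-byte logical blocks"

print(f"image built for : {(image_last_lba + 1) * SECTOR / 1e9:.1f} GB")  # ~9.1 GB
print(f"actual disk     : {disk_sectors * SECTOR / 1e9:.1f} GB")          # 64.4 GB
# matches the virtio_blk line; tools like sgdisk or parted can move the
# backup header to the true end of the disk, which is what the disk-uuid
# step below ends up doing ("Secondary Header is updated.")
```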
Feb 13 22:50:09.175202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 22:50:09.178261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 22:50:09.187531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 22:50:09.197473 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 22:50:09.318953 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 22:50:09.319007 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (478)
Feb 13 22:50:09.319029 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 22:50:09.319324 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 22:50:09.319535 kernel: AVX version of gcm_enc/dec engaged.
Feb 13 22:50:09.319558 kernel: ACPI: bus type USB registered
Feb 13 22:50:09.319583 kernel: usbcore: registered new interface driver usbfs
Feb 13 22:50:09.319602 kernel: usbcore: registered new interface driver hub
Feb 13 22:50:09.319619 kernel: usbcore: registered new device driver usb
Feb 13 22:50:09.319643 kernel: scsi host0: ahci
Feb 13 22:50:09.321255 kernel: scsi host1: ahci
Feb 13 22:50:09.321453 kernel: scsi host2: ahci
Feb 13 22:50:09.321651 kernel: AES CTR mode by8 optimization enabled
Feb 13 22:50:09.321672 kernel: scsi host3: ahci
Feb 13 22:50:09.321860 kernel: scsi host4: ahci
Feb 13 22:50:09.324335 kernel: scsi host5: ahci
Feb 13 22:50:09.324541 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Feb 13 22:50:09.324572 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Feb 13 22:50:09.324597 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Feb 13 22:50:09.324616 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Feb 13 22:50:09.324633 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Feb 13 22:50:09.324651 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Feb 13 22:50:09.324676 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (471)
Feb 13 22:50:09.292939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 22:50:09.321989 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 22:50:09.421053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 22:50:09.432851 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 22:50:09.444298 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 22:50:09.445200 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 22:50:09.452060 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 22:50:09.457048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 22:50:09.460295 disk-uuid[558]: Primary Header is updated.
Feb 13 22:50:09.460295 disk-uuid[558]: Secondary Entries is updated.
Feb 13 22:50:09.460295 disk-uuid[558]: Secondary Header is updated.
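
The AHCI line above, "32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode", carries the implemented-ports bitmask: 0x3f has six bits set, one per port, which matches the six "scsi hostN: ahci" entries and the six ataN link probes that follow. Verifying the mask:

```python
impl_mask = 0x3f  # from "ahci 0000:00:1f.2: ... 6 ports 1.5 Gbps 0x3f impl"
ports = [i for i in range(32) if impl_mask >> i & 1]
print(len(ports), "ports implemented:", ports)  # 6 ports: [0, 1, 2, 3, 4, 5]
```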
Feb 13 22:50:09.466283 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:50:09.486375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:50:09.629916 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 22:50:09.630004 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 22:50:09.632657 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 22:50:09.633311 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 22:50:09.635968 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 22:50:09.637623 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 22:50:09.644520 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 22:50:09.668425 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 13 22:50:09.668696 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 22:50:09.668954 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 22:50:09.669214 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 13 22:50:09.669429 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 13 22:50:09.669625 kernel: hub 1-0:1.0: USB hub found Feb 13 22:50:09.669843 kernel: hub 1-0:1.0: 4 ports detected Feb 13 22:50:09.670084 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 22:50:09.670326 kernel: hub 2-0:1.0: USB hub found Feb 13 22:50:09.670548 kernel: hub 2-0:1.0: 4 ports detected Feb 13 22:50:09.904956 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 22:50:10.045937 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 22:50:10.050923 kernel: usbcore: registered new interface driver usbhid Feb 13 22:50:10.050964 kernel: usbhid: USB HID core driver Feb 13 22:50:10.059575 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 13 22:50:10.059614 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 22:50:10.477910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:50:10.478512 disk-uuid[559]: The operation has completed successfully. Feb 13 22:50:10.531635 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 22:50:10.532722 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 22:50:10.549179 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 22:50:10.554881 sh[584]: Success Feb 13 22:50:10.572608 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 13 22:50:10.628675 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 22:50:10.642026 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 22:50:10.645939 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
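verity-setup.service has just activated /dev/mapper/usr, with the kernel picking the "sha256-avx" implementation. dm-verity authenticates fixed-size data blocks against a precomputed hash tree whose root hash is supplied at setup time. The sketch below shows only the per-block idea, not the real on-disk format: actual verity salts each block and verifies up a Merkle tree, and `leaf_hash`/`verify_block` are illustrative names:

```python
import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def leaf_hash(block: bytes, salt: bytes = b"") -> bytes:
    # Illustrative only: veritysetup mixes a per-device salt into each block
    # hash; the salt and root hash come from the verity superblock/cmdline.
    return hashlib.sha256(salt + block).digest()

def verify_block(dev, index: int, expected: bytes, salt: bytes = b"") -> bool:
    # Read one data block and compare against its expected tree-leaf hash.
    dev.seek(index * BLOCK)
    return leaf_hash(dev.read(BLOCK), salt) == expected
```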
Feb 13 22:50:10.669134 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 22:50:10.669196 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:50:10.669217 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 22:50:10.672995 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 22:50:10.673037 kernel: BTRFS info (device dm-0): using free space tree Feb 13 22:50:10.683371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 22:50:10.684803 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 22:50:10.691036 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 22:50:10.694940 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 22:50:10.711444 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:50:10.711499 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:50:10.711517 kernel: BTRFS info (device vda6): using free space tree Feb 13 22:50:10.718923 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 22:50:10.731997 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 22:50:10.735587 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:50:10.741344 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 22:50:10.748076 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 22:50:10.921199 ignition[669]: Ignition 2.20.0 Feb 13 22:50:10.922235 ignition[669]: Stage: fetch-offline Feb 13 22:50:10.922320 ignition[669]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:50:10.924549 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 22:50:10.922341 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:50:10.925997 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 22:50:10.922484 ignition[669]: parsed url from cmdline: "" Feb 13 22:50:10.922491 ignition[669]: no config URL provided Feb 13 22:50:10.922501 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 22:50:10.922517 ignition[669]: no config at "/usr/lib/ignition/user.ign" Feb 13 22:50:10.922527 ignition[669]: failed to fetch config: resource requires networking Feb 13 22:50:10.922772 ignition[669]: Ignition finished successfully Feb 13 22:50:10.934162 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 22:50:10.975339 systemd-networkd[773]: lo: Link UP Feb 13 22:50:10.975357 systemd-networkd[773]: lo: Gained carrier Feb 13 22:50:10.977512 systemd-networkd[773]: Enumeration completed Feb 13 22:50:10.978058 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:50:10.978064 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
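The fetch-offline messages above show Ignition's local lookup order before it gives up and waits for networking: base config directories, a config URL from the kernel command line, then the baked-in user.ign. A rough Python paraphrase of that order (`fetch_offline` is an illustrative name, not Ignition's actual Go implementation):

```python
import os

# Paths taken from the ignition[669] messages above.
BASE_DIR = "/usr/lib/ignition/base.d"
PLATFORM_DIR = "/usr/lib/ignition/base.platform.d/openstack"
USER_CONFIG = "/usr/lib/ignition/user.ign"

def fetch_offline(cmdline_url: str = ""):
    for d in (BASE_DIR, PLATFORM_DIR):
        if not os.path.isdir(d):
            print(f'no configs at "{d}"')
    if cmdline_url:                      # 'parsed url from cmdline: ""'
        return ("url", cmdline_url)
    print("no config URL provided")
    if os.path.isfile(USER_CONFIG):      # "reading system config file ..."
        return ("file", USER_CONFIG)
    # Nothing available locally: on this platform the config lives on a
    # config drive or the metadata service, so the offline stage bails out.
    raise RuntimeError("failed to fetch config: resource requires networking")
```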
Feb 13 22:50:10.979405 systemd-networkd[773]: eth0: Link UP Feb 13 22:50:10.979411 systemd-networkd[773]: eth0: Gained carrier Feb 13 22:50:10.979422 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:50:10.979698 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 22:50:10.983247 systemd[1]: Reached target network.target - Network. Feb 13 22:50:10.994108 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 22:50:11.000950 systemd-networkd[773]: eth0: DHCPv4 address 10.230.69.138/30, gateway 10.230.69.137 acquired from 10.230.69.137 Feb 13 22:50:11.015309 ignition[775]: Ignition 2.20.0 Feb 13 22:50:11.015332 ignition[775]: Stage: fetch Feb 13 22:50:11.015549 ignition[775]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:50:11.015569 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:50:11.015695 ignition[775]: parsed url from cmdline: "" Feb 13 22:50:11.015702 ignition[775]: no config URL provided Feb 13 22:50:11.015721 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 22:50:11.015736 ignition[775]: no config at "/usr/lib/ignition/user.ign" Feb 13 22:50:11.015917 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 22:50:11.016105 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 22:50:11.016153 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 13 22:50:11.031307 ignition[775]: GET result: OK Feb 13 22:50:11.031842 ignition[775]: parsing config with SHA512: faea1fb53080528fc2d7239b4ba8a276de6740f9a2c22013c1aefb9dbcfa06d21fb8f0bd6c49fe8a6ef18e6e7601ca261f849d9008710f6961faa27a8a8a2e41 Feb 13 22:50:11.039546 unknown[775]: fetched base config from "system" Feb 13 22:50:11.039562 unknown[775]: fetched base config from "system" Feb 13 22:50:11.039571 unknown[775]: fetched user config from "openstack" Feb 13 22:50:11.040491 ignition[775]: fetch: fetch complete Feb 13 22:50:11.040500 ignition[775]: fetch: fetch passed Feb 13 22:50:11.042481 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 22:50:11.040564 ignition[775]: Ignition finished successfully Feb 13 22:50:11.050067 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 22:50:11.070088 ignition[782]: Ignition 2.20.0 Feb 13 22:50:11.071084 ignition[782]: Stage: kargs Feb 13 22:50:11.071311 ignition[782]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:50:11.071331 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:50:11.073559 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 22:50:11.072343 ignition[782]: kargs: kargs passed Feb 13 22:50:11.072413 ignition[782]: Ignition finished successfully Feb 13 22:50:11.084157 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 22:50:11.109482 ignition[788]: Ignition 2.20.0 Feb 13 22:50:11.109503 ignition[788]: Stage: disks Feb 13 22:50:11.109774 ignition[788]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:50:11.112462 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 22:50:11.109793 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:50:11.114084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
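With eth0 up and a DHCP lease in hand, the fetch stage races a config drive (under either label casing) against the OpenStack metadata service, and logs a SHA512 of whatever config it ends up parsing. A hedged sketch of that flow; the URL and labels come from the log, while the function names and timeout are illustrative:

```python
import hashlib
import os
import urllib.request

USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"
CONFIG_DRIVE_LABELS = ("config-2", "CONFIG-2")

def config_drive_present() -> bool:
    # Ignition keeps "Waiting..." on these device links while it retries the URL.
    return any(os.path.exists(f"/dev/disk/by-label/{l}") for l in CONFIG_DRIVE_LABELS)

def fetch_userdata() -> bytes:
    if config_drive_present():
        raise RuntimeError("config drive found: read openstack/latest/user_data from it")
    with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:  # "GET ... attempt #1"
        data = resp.read()
    # Ignition logs a SHA512 of the parsed config; handy for auditing.
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
    return data
```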
Feb 13 22:50:11.111180 ignition[788]: disks: disks passed Feb 13 22:50:11.115143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 22:50:11.111250 ignition[788]: Ignition finished successfully Feb 13 22:50:11.116667 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 22:50:11.118179 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 22:50:11.119386 systemd[1]: Reached target basic.target - Basic System. Feb 13 22:50:11.129156 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 22:50:11.147653 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 22:50:11.153004 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 22:50:11.162018 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 22:50:11.274942 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 22:50:11.275248 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 22:50:11.276602 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 22:50:11.286059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 22:50:11.288959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 22:50:11.290544 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 22:50:11.296220 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 22:50:11.302263 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (804) Feb 13 22:50:11.302294 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:50:11.302312 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:50:11.302329 kernel: BTRFS info (device vda6): using free space tree Feb 13 22:50:11.307461 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 22:50:11.308865 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 22:50:11.308346 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 22:50:11.313214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 22:50:11.318009 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 22:50:11.328130 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 22:50:11.400745 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 22:50:11.408907 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory Feb 13 22:50:11.417308 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 22:50:11.426747 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 22:50:11.532292 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 22:50:11.540019 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 22:50:11.542188 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Feb 13 22:50:11.554910 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:50:11.684137 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 22:50:11.697847 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 22:50:11.710645 ignition[920]: INFO : Ignition 2.20.0 Feb 13 22:50:11.710645 ignition[920]: INFO : Stage: mount Feb 13 22:50:11.712431 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 22:50:11.712431 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:50:11.712431 ignition[920]: INFO : mount: mount passed Feb 13 22:50:11.712431 ignition[920]: INFO : Ignition finished successfully Feb 13 22:50:11.713390 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 22:50:12.061328 systemd-networkd[773]: eth0: Gained IPv6LL Feb 13 22:50:13.566989 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9162:24:19ff:fee6:458a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9162:24:19ff:fee6:458a/64 assigned by NDisc. Feb 13 22:50:13.567006 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 22:50:18.470698 coreos-metadata[806]: Feb 13 22:50:18.470 WARN failed to locate config-drive, using the metadata service API instead Feb 13 22:50:18.491351 coreos-metadata[806]: Feb 13 22:50:18.491 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 22:50:18.504414 coreos-metadata[806]: Feb 13 22:50:18.504 INFO Fetch successful Feb 13 22:50:18.505266 coreos-metadata[806]: Feb 13 22:50:18.504 INFO wrote hostname srv-uk33m.gb1.brightbox.com to /sysroot/etc/hostname Feb 13 22:50:18.507430 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 22:50:18.507625 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 22:50:18.513998 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 22:50:18.536144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 22:50:18.558922 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937) Feb 13 22:50:18.564158 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:50:18.564215 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:50:18.565907 kernel: BTRFS info (device vda6): using free space tree Feb 13 22:50:18.570931 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 22:50:18.573547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
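coreos-metadata above fails to locate a config drive, falls back to the metadata service, and writes the fetched hostname into the not-yet-pivoted root. A minimal equivalent sketch (`write_hostname` is an illustrative name; the URL and target path are the ones from the log):

```python
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

def write_hostname(root: str = "/sysroot") -> str:
    # Fetch the instance hostname from the EC2-style metadata endpoint...
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    # ...and persist it for the real root, as the coreos-metadata line shows.
    with open(f"{root}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname  # e.g. srv-uk33m.gb1.brightbox.com
```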
Feb 13 22:50:18.601302 ignition[955]: INFO : Ignition 2.20.0 Feb 13 22:50:18.602439 ignition[955]: INFO : Stage: files Feb 13 22:50:18.602439 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 22:50:18.602439 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:50:18.604778 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Feb 13 22:50:18.604778 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 22:50:18.604778 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 22:50:18.607525 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 22:50:18.607525 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 22:50:18.609363 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 22:50:18.607866 unknown[955]: wrote ssh authorized keys file for user: core Feb 13 22:50:18.611278 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 22:50:18.611278 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 22:50:18.794481 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 22:50:19.085064 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 22:50:19.085064 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 22:50:19.085064 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 22:50:19.642412 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 22:50:19.963852 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 22:50:19.965314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 22:50:19.974719 
ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 22:50:19.974719 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 22:50:19.974719 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 22:50:19.974719 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 22:50:19.974719 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 22:50:19.974719 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 22:50:20.559593 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 22:50:22.627454 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 22:50:22.627454 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 22:50:22.633707 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 22:50:22.635275 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 22:50:22.635275 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 22:50:22.635275 ignition[955]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 22:50:22.635275 ignition[955]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 22:50:22.640370 ignition[955]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 22:50:22.640370 ignition[955]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 22:50:22.640370 ignition[955]: INFO : files: files passed Feb 13 22:50:22.640370 ignition[955]: INFO : Ignition finished successfully Feb 13 22:50:22.639414 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 22:50:22.663343 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 22:50:22.669353 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 22:50:22.674758 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 22:50:22.674995 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
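The files stage above amounts to a fairly small Ignition config: SSH keys for core, a handful of files and downloads, a symlink activating the kubernetes sysext, and an enabled prepare-helm.service. Below is an abridged reconstruction of what such a config could look like; the spec version, key material, and unit contents are assumptions, and only the paths and URLs are taken from the log:

```python
import json

# Abridged sketch of an Ignition config matching the operations logged above.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {"users": [
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core"]}  # placeholder key
    ]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"},
        ],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}
    ]},
}
print(json.dumps(config, indent=2))
```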
Feb 13 22:50:22.693015 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 22:50:22.693015 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 22:50:22.697099 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 22:50:22.699697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 22:50:22.702356 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 22:50:22.709209 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 22:50:22.751440 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 22:50:22.751636 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 22:50:22.753187 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 22:50:22.754212 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 22:50:22.756199 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 22:50:22.763152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 22:50:22.790005 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 22:50:22.797109 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 22:50:22.812692 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 22:50:22.814677 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 22:50:22.815567 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 22:50:22.817162 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 22:50:22.817357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 22:50:22.819202 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 22:50:22.820319 systemd[1]: Stopped target basic.target - Basic System. Feb 13 22:50:22.821724 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 22:50:22.823112 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 22:50:22.824501 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 22:50:22.826137 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 22:50:22.827731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 22:50:22.829461 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 22:50:22.831029 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 22:50:22.832511 systemd[1]: Stopped target swap.target - Swaps. Feb 13 22:50:22.833863 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 22:50:22.834181 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 22:50:22.835790 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 22:50:22.836855 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 22:50:22.838106 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 22:50:22.838316 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 22:50:22.839812 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 22:50:22.840115 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 22:50:22.841905 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 22:50:22.842089 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 22:50:22.843779 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 22:50:22.844084 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 22:50:22.852271 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 22:50:22.853061 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 22:50:22.853394 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 22:50:22.864143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 22:50:22.865733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 22:50:22.872440 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 22:50:22.877320 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 22:50:22.877512 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 22:50:22.888489 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 22:50:22.890019 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 22:50:22.915648 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 22:50:22.921293 ignition[1008]: INFO : Ignition 2.20.0 Feb 13 22:50:22.921293 ignition[1008]: INFO : Stage: umount Feb 13 22:50:22.925026 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 22:50:22.925026 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:50:22.925026 ignition[1008]: INFO : umount: umount passed Feb 13 22:50:22.925026 ignition[1008]: INFO : Ignition finished successfully Feb 13 22:50:22.922670 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 22:50:22.922828 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 22:50:22.925095 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 22:50:22.925236 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 22:50:22.927544 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 22:50:22.927712 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 22:50:22.929986 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 22:50:22.930097 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 22:50:22.931322 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 22:50:22.931445 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 22:50:22.932708 systemd[1]: Stopped target network.target - Network. Feb 13 22:50:22.934099 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 22:50:22.934197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 22:50:22.935618 systemd[1]: Stopped target paths.target - Path Units. Feb 13 22:50:22.936976 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 22:50:22.942515 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 22:50:22.943616 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 22:50:22.945189 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 22:50:22.946632 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 22:50:22.946733 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 22:50:22.948226 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 22:50:22.948308 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 22:50:22.949775 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 22:50:22.949926 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 22:50:22.951198 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 22:50:22.951267 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 22:50:22.952604 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 22:50:22.952704 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 22:50:22.954382 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 22:50:22.958058 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 22:50:22.961224 systemd-networkd[773]: eth0: DHCPv6 lease lost Feb 13 22:50:22.965747 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 22:50:22.966108 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 22:50:22.969494 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 22:50:22.969753 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 22:50:22.974851 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 22:50:22.975554 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 22:50:22.982018 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 22:50:22.983552 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 22:50:22.984477 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 22:50:22.986725 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 22:50:22.986810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 22:50:22.987548 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 22:50:22.987643 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 22:50:22.988438 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 22:50:22.988525 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 22:50:22.990196 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 22:50:22.999311 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 22:50:22.999566 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 22:50:23.002350 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 22:50:23.002535 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 22:50:23.003771 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 22:50:23.003830 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 22:50:23.004635 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 22:50:23.004711 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 22:50:23.006844 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 22:50:23.006971 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 22:50:23.009949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 22:50:23.010037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:50:23.020225 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 22:50:23.022015 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 22:50:23.022093 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 22:50:23.024008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 22:50:23.024087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:50:23.025609 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 22:50:23.028359 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 22:50:23.029992 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 22:50:23.030139 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 22:50:23.032062 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 22:50:23.040142 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 22:50:23.051201 systemd[1]: Switching root. Feb 13 22:50:23.091770 systemd-journald[202]: Journal stopped Feb 13 22:50:24.521447 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Feb 13 22:50:24.521682 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 22:50:24.521738 kernel: SELinux: policy capability open_perms=1 Feb 13 22:50:24.521765 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 22:50:24.521784 kernel: SELinux: policy capability always_check_network=0 Feb 13 22:50:24.521820 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 22:50:24.521840 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 22:50:24.521882 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 22:50:24.521915 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 22:50:24.521957 systemd[1]: Successfully loaded SELinux policy in 56.572ms. Feb 13 22:50:24.522024 kernel: audit: type=1403 audit(1739487023.388:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 22:50:24.522076 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.013ms. Feb 13 22:50:24.522115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 22:50:24.522147 systemd[1]: Detected virtualization kvm. Feb 13 22:50:24.522169 systemd[1]: Detected architecture x86-64. Feb 13 22:50:24.522187 systemd[1]: Detected first boot. Feb 13 22:50:24.522218 systemd[1]: Hostname set to . Feb 13 22:50:24.522239 systemd[1]: Initializing machine ID from VM UUID. Feb 13 22:50:24.522279 zram_generator::config[1056]: No configuration found. Feb 13 22:50:24.522308 systemd[1]: Populated /etc with preset unit settings. 
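"Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided SMBIOS UUID rather than generating a random one. A sketch of that derivation, assuming the usual DMI sysfs path (`machine_id_from_vm_uuid` is an illustrative name, not systemd's code):

```python
def machine_id_from_vm_uuid() -> str:
    # On KVM the hypervisor exposes a stable product UUID via DMI/SMBIOS.
    with open("/sys/class/dmi/id/product_uuid") as f:
        uuid = f.read().strip()
    # machine-id format: 32 lowercase hex digits, no dashes.
    return uuid.replace("-", "").lower()
```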
Feb 13 22:50:24.522328 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 22:50:24.522356 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 22:50:24.522384 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 22:50:24.522417 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 22:50:24.522456 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 22:50:24.522485 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 22:50:24.522523 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 22:50:24.522553 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 22:50:24.522574 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 22:50:24.522603 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 22:50:24.522624 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 22:50:24.522658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 22:50:24.522680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 22:50:24.522713 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 22:50:24.522733 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 22:50:24.522770 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 22:50:24.522792 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 22:50:24.522811 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 22:50:24.522841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 22:50:24.522862 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 22:50:24.523928 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 22:50:24.523973 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 22:50:24.523995 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 22:50:24.524014 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 22:50:24.524049 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 22:50:24.524078 systemd[1]: Reached target slices.target - Slice Units. Feb 13 22:50:24.524108 systemd[1]: Reached target swap.target - Swaps. Feb 13 22:50:24.524136 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 22:50:24.524165 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 22:50:24.524193 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 22:50:24.524242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 22:50:24.524271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 22:50:24.524292 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 22:50:24.524323 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 22:50:24.524371 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 22:50:24.524393 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 22:50:24.524427 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:50:24.524449 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 22:50:24.524468 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 22:50:24.524495 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 22:50:24.524541 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 22:50:24.524563 systemd[1]: Reached target machines.target - Containers. Feb 13 22:50:24.524583 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 22:50:24.524602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 22:50:24.524636 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 22:50:24.524665 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 22:50:24.524686 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 22:50:24.524713 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 22:50:24.524741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 22:50:24.524762 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 22:50:24.524781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 22:50:24.524809 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 22:50:24.524846 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 22:50:24.524898 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 22:50:24.524922 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 22:50:24.524956 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 22:50:24.524977 kernel: fuse: init (API version 7.39) Feb 13 22:50:24.524996 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 22:50:24.525027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 22:50:24.525049 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 22:50:24.525079 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 22:50:24.525159 systemd-journald[1141]: Collecting audit messages is disabled. Feb 13 22:50:24.525222 systemd-journald[1141]: Journal started Feb 13 22:50:24.525279 systemd-journald[1141]: Runtime Journal (/run/log/journal/be9603bc1f4948c48090eca85bc2996a) is 4.7M, max 38.0M, 33.2M free. Feb 13 22:50:24.165055 systemd[1]: Queued start job for default target multi-user.target. Feb 13 22:50:24.189174 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 22:50:24.190014 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 22:50:24.545908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Feb 13 22:50:24.554915 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 22:50:24.577911 systemd[1]: Stopped verity-setup.service. Feb 13 22:50:24.577992 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:50:24.582901 kernel: ACPI: bus type drm_connector registered Feb 13 22:50:24.589932 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 22:50:24.591498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 22:50:24.593163 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 22:50:24.594069 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 22:50:24.594992 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 22:50:24.598097 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 22:50:24.598997 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 22:50:24.600372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 22:50:24.601622 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 22:50:24.601981 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 22:50:24.603155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 22:50:24.603414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 22:50:24.604703 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 22:50:24.604965 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 22:50:24.606425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 22:50:24.606667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 22:50:24.607903 kernel: loop: module loaded Feb 13 22:50:24.609420 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 22:50:24.609679 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 22:50:24.610960 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 22:50:24.611193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 22:50:24.612331 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 22:50:24.615576 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 22:50:24.621286 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 22:50:24.622502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 22:50:24.634753 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 22:50:24.642999 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 22:50:24.652022 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 22:50:24.654982 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 22:50:24.655043 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 22:50:24.659095 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 22:50:24.682145 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Feb 13 22:50:24.691037 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 22:50:24.692031 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 22:50:24.700540 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 22:50:24.711135 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 22:50:24.713995 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 22:50:24.720104 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 22:50:24.720958 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 22:50:24.728525 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 22:50:24.734030 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 22:50:24.739264 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 22:50:24.742251 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 22:50:24.750091 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 22:50:24.751324 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 22:50:24.770425 systemd-journald[1141]: Time spent on flushing to /var/log/journal/be9603bc1f4948c48090eca85bc2996a is 118.908ms for 1141 entries. Feb 13 22:50:24.770425 systemd-journald[1141]: System Journal (/var/log/journal/be9603bc1f4948c48090eca85bc2996a) is 8.0M, max 584.8M, 576.8M free. Feb 13 22:50:24.977315 systemd-journald[1141]: Received client request to flush runtime journal. Feb 13 22:50:24.977375 kernel: loop0: detected capacity change from 0 to 218376 Feb 13 22:50:24.977437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 22:50:24.778622 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 22:50:24.780439 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 22:50:24.791112 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 22:50:24.852271 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 22:50:24.853964 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 22:50:24.979211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 22:50:24.983398 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 22:50:24.999168 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 22:50:25.015341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 22:50:25.021073 kernel: loop1: detected capacity change from 0 to 8 Feb 13 22:50:25.026893 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 22:50:25.036302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 22:50:25.077581 kernel: loop2: detected capacity change from 0 to 138184 Feb 13 22:50:25.076628 udevadm[1200]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 22:50:25.144710 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Feb 13 22:50:25.144742 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Feb 13 22:50:25.164765 kernel: loop3: detected capacity change from 0 to 140992 Feb 13 22:50:25.167581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 22:50:25.246089 kernel: loop4: detected capacity change from 0 to 218376 Feb 13 22:50:25.283921 kernel: loop5: detected capacity change from 0 to 8 Feb 13 22:50:25.295123 kernel: loop6: detected capacity change from 0 to 138184 Feb 13 22:50:25.333825 kernel: loop7: detected capacity change from 0 to 140992 Feb 13 22:50:25.366051 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 13 22:50:25.369143 (sd-merge)[1211]: Merged extensions into '/usr'. Feb 13 22:50:25.407206 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 22:50:25.407297 systemd[1]: Reloading... Feb 13 22:50:25.698435 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 22:50:25.700590 zram_generator::config[1238]: No configuration found. Feb 13 22:50:25.987393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:50:26.054188 systemd[1]: Reloading finished in 645 ms. Feb 13 22:50:26.082809 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 22:50:26.089453 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 22:50:26.100227 systemd[1]: Starting ensure-sysext.service... Feb 13 22:50:26.112281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 22:50:26.144678 systemd[1]: Reloading requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)... Feb 13 22:50:26.145087 systemd[1]: Reloading... Feb 13 22:50:26.148134 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 22:50:26.148749 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 22:50:26.157220 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 22:50:26.158510 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Feb 13 22:50:26.158619 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Feb 13 22:50:26.167483 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 22:50:26.168792 systemd-tmpfiles[1294]: Skipping /boot Feb 13 22:50:26.222156 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 22:50:26.222367 systemd-tmpfiles[1294]: Skipping /boot Feb 13 22:50:26.313927 zram_generator::config[1322]: No configuration found. Feb 13 22:50:26.489645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:50:26.557170 systemd[1]: Reloading finished in 410 ms. 
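The (sd-merge) lines show systemd-sysext overlaying four extension images onto /usr. An image is only eligible if it ships an extension-release file naming a compatible OS; a simplified sketch of that compatibility check follows (`extension_ok` is illustrative, and the real check also compares fields such as SYSEXT_LEVEL/VERSION_ID, omitted here):

```python
import os

def extension_ok(ext_root: str, name: str, host_id: str = "flatcar") -> bool:
    # Each sysext image must carry usr/lib/extension-release.d/extension-release.<NAME>.
    rel = os.path.join(ext_root, "usr/lib/extension-release.d",
                       f"extension-release.{name}")
    if not os.path.isfile(rel):
        return False
    fields = {}
    with open(rel) as f:
        for line in f:
            if "=" in line:
                key, value = line.strip().split("=", 1)
                fields[key] = value.strip('"')
    # ID must match the host's os-release ID (or the wildcard "_any").
    return fields.get("ID") in (host_id, "_any")
```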
Feb 13 22:50:26.583951 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 22:50:26.589599 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 22:50:26.610167 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 22:50:26.617143 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 22:50:26.620099 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 22:50:26.626299 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 22:50:26.638148 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 22:50:26.649175 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 22:50:26.665447 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 22:50:26.669843 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:50:26.671178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 22:50:26.678276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 22:50:26.681253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 22:50:26.688282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 22:50:26.691237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 22:50:26.691428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:50:26.697734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:50:26.698058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 22:50:26.698316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 22:50:26.698459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:50:26.705089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 22:50:26.705966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 22:50:26.712497 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:50:26.712815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 22:50:26.726293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 22:50:26.733221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 22:50:26.734211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 22:50:26.734424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 22:50:26.740977 systemd[1]: Finished ensure-sysext.service. Feb 13 22:50:26.749095 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 22:50:26.751372 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 22:50:26.773209 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 22:50:26.774063 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 22:50:26.780582 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 22:50:26.781206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 22:50:26.793694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 22:50:26.794036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 22:50:26.796413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 22:50:26.813206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 22:50:26.813462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 22:50:26.814708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 22:50:26.825955 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 22:50:26.843297 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 22:50:26.844953 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 22:50:26.845294 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 22:50:26.887402 systemd-udevd[1384]: Using default interface naming scheme 'v255'. Feb 13 22:50:26.891888 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 22:50:26.894423 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 22:50:26.898500 augenrules[1423]: No rules Feb 13 22:50:26.904490 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 22:50:26.904802 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 22:50:26.955462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 22:50:26.971077 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 22:50:27.011676 systemd-resolved[1383]: Positive Trust Anchors: Feb 13 22:50:27.011702 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 22:50:27.011744 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 22:50:27.021708 systemd-resolved[1383]: Using system hostname 'srv-uk33m.gb1.brightbox.com'. 
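The positive trust anchor systemd-resolved prints above is the IANA root-zone DS record: owner ".", key tag 20326 (the KSK-2017 key), algorithm 8 (RSASHA256) and digest type 2 (SHA-256). Splitting it into fields makes the notation explicit:

    # Break the root DS record logged above into its named fields.
    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, cls, rtype, key_tag, algorithm, digest_type, digest = record.split()
    print(f"key_tag={key_tag}, algorithm={algorithm} (RSASHA256), "
          f"digest_type={digest_type} (SHA-256), digest={digest}")

The negative trust anchors that follow are the usual private- and special-use zones (RFC 1918 reverse zones, home.arpa, .local and similar) for which DNSSEC validation is deliberately skipped.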
Feb 13 22:50:27.025595 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 22:50:27.026609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 22:50:27.041923 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 22:50:27.045197 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 22:50:27.123260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 22:50:27.138911 systemd-networkd[1439]: lo: Link UP Feb 13 22:50:27.138924 systemd-networkd[1439]: lo: Gained carrier Feb 13 22:50:27.146049 systemd-networkd[1439]: Enumeration completed Feb 13 22:50:27.146302 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 22:50:27.146951 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:50:27.146958 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 22:50:27.148145 systemd[1]: Reached target network.target - Network. Feb 13 22:50:27.153054 systemd-networkd[1439]: eth0: Link UP Feb 13 22:50:27.153068 systemd-networkd[1439]: eth0: Gained carrier Feb 13 22:50:27.153088 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:50:27.156966 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 22:50:27.179251 systemd-networkd[1439]: eth0: DHCPv4 address 10.230.69.138/30, gateway 10.230.69.137 acquired from 10.230.69.137 Feb 13 22:50:27.180746 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Feb 13 22:50:27.302086 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1436) Feb 13 22:50:27.302293 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 22:50:27.313111 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 22:50:27.318183 kernel: ACPI: button: Power Button [PWRF] Feb 13 22:50:27.345092 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:50:27.394177 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 22:50:27.405580 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 22:50:27.423029 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 22:50:27.424166 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 22:50:27.424477 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 22:50:27.433897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 22:50:27.454338 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 22:50:27.491212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 22:50:27.696174 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:50:27.721534 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
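The DHCPv4 lease above is a /30, the smallest conventional IPv4 subnet: four addresses, two of them usable hosts, which is exactly enough for this machine (10.230.69.138) and its gateway (10.230.69.137). A quick check with the standard library:

    # Verify the /30 math from the DHCP lease logged above.
    import ipaddress

    net = ipaddress.ip_network("10.230.69.138/30", strict=False)
    print(net)                # 10.230.69.136/30
    print(list(net.hosts()))  # [IPv4Address('10.230.69.137'), IPv4Address('10.230.69.138')]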
Feb 13 22:50:27.729190 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 22:50:27.747396 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 22:50:27.827229 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 22:50:27.828527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 22:50:27.829340 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 22:50:27.830301 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 22:50:27.831312 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 22:50:27.832459 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 22:50:27.833320 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 22:50:27.834185 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 22:50:27.834950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 22:50:27.835002 systemd[1]: Reached target paths.target - Path Units. Feb 13 22:50:27.835626 systemd[1]: Reached target timers.target - Timer Units. Feb 13 22:50:27.838529 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 22:50:27.841607 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 22:50:27.847205 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 22:50:27.849988 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 22:50:27.851354 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 22:50:27.852240 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 22:50:27.852954 systemd[1]: Reached target basic.target - Basic System. Feb 13 22:50:27.853702 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 22:50:27.853748 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 22:50:27.860031 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 22:50:27.867224 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 22:50:27.867475 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 22:50:27.870083 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 22:50:27.881994 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 22:50:27.886084 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 22:50:27.886787 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 22:50:27.890333 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 22:50:27.896224 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 22:50:27.905089 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 22:50:27.915161 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 22:50:27.930158 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 22:50:27.939064 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 22:50:27.939914 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 22:50:27.946121 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 22:50:27.950912 jq[1481]: false Feb 13 22:50:27.957019 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 22:50:27.962934 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 22:50:27.970564 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 22:50:27.971322 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 22:50:27.979510 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 22:50:27.979777 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 22:50:27.984455 extend-filesystems[1482]: Found loop4 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found loop5 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found loop6 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found loop7 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found vda Feb 13 22:50:27.984455 extend-filesystems[1482]: Found vda1 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found vda2 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found vda3 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found usr Feb 13 22:50:27.984455 extend-filesystems[1482]: Found vda4 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found vda6 Feb 13 22:50:27.984455 extend-filesystems[1482]: Found vda7 Feb 13 22:50:27.983579 dbus-daemon[1480]: [system] SELinux support is enabled Feb 13 22:50:28.033695 update_engine[1490]: I20250213 22:50:28.022486 1490 main.cc:92] Flatcar Update Engine starting Feb 13 22:50:28.034092 extend-filesystems[1482]: Found vda9 Feb 13 22:50:28.034092 extend-filesystems[1482]: Checking size of /dev/vda9 Feb 13 22:50:27.985340 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 22:50:28.006455 dbus-daemon[1480]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1439 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 22:50:28.043130 jq[1491]: true Feb 13 22:50:28.022958 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 22:50:28.026017 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 22:50:28.023013 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 22:50:28.023986 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 22:50:28.024017 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
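The run of "Found loop4" through "Found vda9" lines above is extend-filesystems walking the block devices before picking /dev/vda9 (the root filesystem) to grow. The same inventory is visible to any process through sysfs:

    # Sketch: enumerate block devices the way the 'Found ...' lines above imply.
    import os

    for dev in sorted(os.listdir("/sys/class/block")):
        print("Found", dev)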
Feb 13 22:50:28.041093 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 22:50:28.043389 systemd[1]: Started update-engine.service - Update Engine. Feb 13 22:50:28.064197 update_engine[1490]: I20250213 22:50:28.051364 1490 update_check_scheduler.cc:74] Next update check in 8m21s Feb 13 22:50:28.057061 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 22:50:28.073283 extend-filesystems[1482]: Resized partition /dev/vda9 Feb 13 22:50:28.091859 extend-filesystems[1520]: resize2fs 1.47.1 (20-May-2024) Feb 13 22:50:28.097177 jq[1502]: true Feb 13 22:50:28.096866 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 22:50:28.103711 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 22:50:28.111731 tar[1495]: linux-amd64/LICENSE Feb 13 22:50:28.113392 tar[1495]: linux-amd64/helm Feb 13 22:50:28.118482 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 22:50:28.119281 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 22:50:28.164525 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1449) Feb 13 22:50:28.480518 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 22:50:28.480590 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 22:50:28.487713 systemd-logind[1489]: New seat seat0. Feb 13 22:50:28.497767 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 22:50:28.510486 systemd-networkd[1439]: eth0: Gained IPv6LL Feb 13 22:50:28.516024 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Feb 13 22:50:28.517344 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 22:50:28.520214 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 22:50:28.549650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:50:28.560444 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 22:50:28.588946 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Feb 13 22:50:28.601149 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 22:50:28.595105 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 22:50:28.609404 systemd[1]: Starting sshkeys.service... Feb 13 22:50:28.665916 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 22:50:28.702830 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 22:50:28.705078 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1511 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 22:50:28.706707 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 22:50:28.713198 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 22:50:28.727083 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
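The EXT4 message above grows vda9 from 1617920 to 15121403 blocks of 4 KiB (the "(4k)" in the resize2fs summary just below confirms the block size), i.e. the root filesystem expands from about 6.2 GiB to about 57.7 GiB to fill the provisioned disk. The arithmetic:

    # Convert the block counts from the EXT4 resize messages into GiB.
    BLOCK = 4096  # 4 KiB blocks, per the "(4k)" note in the resize2fs output
    for blocks in (1617920, 15121403):
        print(f"{blocks:>9} blocks = {blocks * BLOCK / 2**30:5.1f} GiB")
    #  1617920 blocks =   6.2 GiB
    # 15121403 blocks =  57.7 GiB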
Feb 13 22:50:28.729968 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 22:50:28.729968 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 22:50:28.729968 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 22:50:28.740467 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Feb 13 22:50:28.741371 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 22:50:28.751143 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 22:50:28.752069 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 22:50:28.763469 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 22:50:28.885427 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 22:50:28.890059 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 22:50:28.907660 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 22:50:28.917130 polkitd[1566]: Started polkitd version 121 Feb 13 22:50:28.956157 polkitd[1566]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 22:50:28.956319 polkitd[1566]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 22:50:28.961324 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 22:50:28.961937 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 22:50:28.966476 polkitd[1566]: Finished loading, compiling and executing 2 rules Feb 13 22:50:28.976341 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 22:50:28.988393 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 22:50:28.991473 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 22:50:28.994605 polkitd[1566]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 22:50:29.048608 containerd[1513]: time="2025-02-13T22:50:29.048407933Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 22:50:29.050827 systemd-hostnamed[1511]: Hostname set to (static) Feb 13 22:50:29.065231 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Feb 13 22:50:29.079049 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Feb 13 22:50:29.095829 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 22:50:29.107560 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 22:50:29.113177 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 22:50:29.114482 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 22:50:29.147388 containerd[1513]: time="2025-02-13T22:50:29.147234338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:50:29.152644 containerd[1513]: time="2025-02-13T22:50:29.152566166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:50:29.152791 containerd[1513]: time="2025-02-13T22:50:29.152764659Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 22:50:29.156050 containerd[1513]: time="2025-02-13T22:50:29.155991767Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 22:50:29.156490 containerd[1513]: time="2025-02-13T22:50:29.156445657Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 22:50:29.156580 containerd[1513]: time="2025-02-13T22:50:29.156502171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 22:50:29.156667 containerd[1513]: time="2025-02-13T22:50:29.156630969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:50:29.156722 containerd[1513]: time="2025-02-13T22:50:29.156666793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:50:29.157028 containerd[1513]: time="2025-02-13T22:50:29.156986255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:50:29.157028 containerd[1513]: time="2025-02-13T22:50:29.157019305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 22:50:29.157146 containerd[1513]: time="2025-02-13T22:50:29.157042169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:50:29.157146 containerd[1513]: time="2025-02-13T22:50:29.157058617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 22:50:29.157301 containerd[1513]: time="2025-02-13T22:50:29.157255118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:50:29.157891 containerd[1513]: time="2025-02-13T22:50:29.157840998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:50:29.159168 containerd[1513]: time="2025-02-13T22:50:29.159133841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:50:29.159475 containerd[1513]: time="2025-02-13T22:50:29.159443734Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 22:50:29.159894 containerd[1513]: time="2025-02-13T22:50:29.159835826Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 22:50:29.160709 containerd[1513]: time="2025-02-13T22:50:29.160681133Z" level=info msg="metadata content store policy set" policy=shared Feb 13 22:50:29.171560 containerd[1513]: time="2025-02-13T22:50:29.171504256Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 22:50:29.171693 containerd[1513]: time="2025-02-13T22:50:29.171653991Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 22:50:29.171799 containerd[1513]: time="2025-02-13T22:50:29.171764746Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 22:50:29.171895 containerd[1513]: time="2025-02-13T22:50:29.171836767Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 22:50:29.171944 containerd[1513]: time="2025-02-13T22:50:29.171888402Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 22:50:29.172231 containerd[1513]: time="2025-02-13T22:50:29.172199500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 22:50:29.172616 containerd[1513]: time="2025-02-13T22:50:29.172524801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 22:50:29.172878 containerd[1513]: time="2025-02-13T22:50:29.172794125Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 22:50:29.172878 containerd[1513]: time="2025-02-13T22:50:29.172849879Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 22:50:29.173003 containerd[1513]: time="2025-02-13T22:50:29.172897043Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 22:50:29.173003 containerd[1513]: time="2025-02-13T22:50:29.172954149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173003 containerd[1513]: time="2025-02-13T22:50:29.172982370Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173136 containerd[1513]: time="2025-02-13T22:50:29.173004321Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173136 containerd[1513]: time="2025-02-13T22:50:29.173027770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173136 containerd[1513]: time="2025-02-13T22:50:29.173071963Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173136 containerd[1513]: time="2025-02-13T22:50:29.173099541Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173136 containerd[1513]: time="2025-02-13T22:50:29.173130547Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173329 containerd[1513]: time="2025-02-13T22:50:29.173153168Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 22:50:29.173329 containerd[1513]: time="2025-02-13T22:50:29.173247214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173329 containerd[1513]: time="2025-02-13T22:50:29.173277214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173329 containerd[1513]: time="2025-02-13T22:50:29.173309221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173355019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173399373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173444141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173467654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173488964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173509586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173531339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173550 containerd[1513]: time="2025-02-13T22:50:29.173550293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173917 containerd[1513]: time="2025-02-13T22:50:29.173590764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173917 containerd[1513]: time="2025-02-13T22:50:29.173633121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173917 containerd[1513]: time="2025-02-13T22:50:29.173680249Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 22:50:29.173917 containerd[1513]: time="2025-02-13T22:50:29.173750059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173917 containerd[1513]: time="2025-02-13T22:50:29.173786547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.173917 containerd[1513]: time="2025-02-13T22:50:29.173819775Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 22:50:29.178292 containerd[1513]: time="2025-02-13T22:50:29.177842504Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 22:50:29.178379 containerd[1513]: time="2025-02-13T22:50:29.178315873Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 22:50:29.178379 containerd[1513]: time="2025-02-13T22:50:29.178343171Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 22:50:29.178379 containerd[1513]: time="2025-02-13T22:50:29.178366201Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 22:50:29.178557 containerd[1513]: time="2025-02-13T22:50:29.178383235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 22:50:29.178557 containerd[1513]: time="2025-02-13T22:50:29.178444115Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 22:50:29.178557 containerd[1513]: time="2025-02-13T22:50:29.178490017Z" level=info msg="NRI interface is disabled by configuration." Feb 13 22:50:29.178557 containerd[1513]: time="2025-02-13T22:50:29.178521306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 22:50:29.179759 containerd[1513]: time="2025-02-13T22:50:29.179296078Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 22:50:29.179759 containerd[1513]: time="2025-02-13T22:50:29.179403894Z" level=info msg="Connect containerd service" Feb 13 22:50:29.179759 containerd[1513]: time="2025-02-13T22:50:29.179493520Z" level=info msg="using legacy CRI server" Feb 13 22:50:29.179759 containerd[1513]: time="2025-02-13T22:50:29.179516397Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 22:50:29.183157 containerd[1513]: 
time="2025-02-13T22:50:29.182179294Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 22:50:29.191181 containerd[1513]: time="2025-02-13T22:50:29.183754764Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195055701Z" level=info msg="Start subscribing containerd event" Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195217568Z" level=info msg="Start recovering state" Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195319754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195379401Z" level=info msg="Start event monitor" Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195413755Z" level=info msg="Start snapshots syncer" Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195454058Z" level=info msg="Start cni network conf syncer for default" Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195472127Z" level=info msg="Start streaming server" Feb 13 22:50:29.195663 containerd[1513]: time="2025-02-13T22:50:29.195484573Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 22:50:29.196457 containerd[1513]: time="2025-02-13T22:50:29.196300722Z" level=info msg="containerd successfully booted in 0.151241s" Feb 13 22:50:29.196452 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 22:50:29.974423 tar[1495]: linux-amd64/README.md Feb 13 22:50:29.992067 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 22:50:30.167136 systemd-networkd[1439]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9162:24:19ff:fee6:458a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9162:24:19ff:fee6:458a/64 assigned by NDisc. Feb 13 22:50:30.167978 systemd-networkd[1439]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 22:50:30.728802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:50:30.744696 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:50:30.878368 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Feb 13 22:50:31.576064 kubelet[1606]: E0213 22:50:31.575963 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:50:31.579626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:50:31.579958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:50:31.580652 systemd[1]: kubelet.service: Consumed 1.855s CPU time. Feb 13 22:50:32.760556 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
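The kubelet failure above is the normal state of a Flatcar node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is generated by kubeadm during init/join, and systemd simply keeps restarting the unit until the file exists (note the climbing restart counter later in the log). Purely to illustrate the file the error refers to, here is a hypothetical minimal KubeletConfiguration written from Python; on a real node this comes from kubeadm, not by hand:

    # Hypothetical stand-in for /var/lib/kubelet/config.yaml
    # (kubeadm writes the real one during "kubeadm join").
    import textwrap
    from pathlib import Path

    config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        """)  # systemd driver matches the runc SystemdCgroup:true in the containerd dump above
    Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    (Path("/var/lib/kubelet") / "config.yaml").write_text(config)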
Feb 13 22:50:32.778533 systemd[1]: Started sshd@0-10.230.69.138:22-147.75.109.163:47044.service - OpenSSH per-connection server daemon (147.75.109.163:47044). Feb 13 22:50:33.709988 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 47044 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:33.712784 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:33.730471 systemd-logind[1489]: New session 1 of user core. Feb 13 22:50:33.733709 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 22:50:33.740443 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 22:50:33.783865 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 22:50:33.792384 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 22:50:33.807499 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 22:50:34.023815 systemd[1620]: Queued start job for default target default.target. Feb 13 22:50:34.034958 systemd[1620]: Created slice app.slice - User Application Slice. Feb 13 22:50:34.035237 systemd[1620]: Reached target paths.target - Paths. Feb 13 22:50:34.035394 systemd[1620]: Reached target timers.target - Timers. Feb 13 22:50:34.037786 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 22:50:34.059076 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 22:50:34.059258 systemd[1620]: Reached target sockets.target - Sockets. Feb 13 22:50:34.059294 systemd[1620]: Reached target basic.target - Basic System. Feb 13 22:50:34.059360 systemd[1620]: Reached target default.target - Main User Target. Feb 13 22:50:34.059438 systemd[1620]: Startup finished in 240ms. Feb 13 22:50:34.059783 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 22:50:34.072190 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 22:50:34.206392 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 22:50:34.212511 login[1591]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 22:50:34.214158 systemd-logind[1489]: New session 2 of user core. Feb 13 22:50:34.226367 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 22:50:34.231939 systemd-logind[1489]: New session 3 of user core. Feb 13 22:50:34.245183 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 22:50:34.705340 systemd[1]: Started sshd@1-10.230.69.138:22-147.75.109.163:47046.service - OpenSSH per-connection server daemon (147.75.109.163:47046). 
Feb 13 22:50:35.129613 coreos-metadata[1479]: Feb 13 22:50:35.129 WARN failed to locate config-drive, using the metadata service API instead Feb 13 22:50:35.164562 coreos-metadata[1479]: Feb 13 22:50:35.164 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 22:50:35.170456 coreos-metadata[1479]: Feb 13 22:50:35.170 INFO Fetch failed with 404: resource not found Feb 13 22:50:35.170456 coreos-metadata[1479]: Feb 13 22:50:35.170 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 22:50:35.170920 coreos-metadata[1479]: Feb 13 22:50:35.170 INFO Fetch successful Feb 13 22:50:35.171039 coreos-metadata[1479]: Feb 13 22:50:35.170 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 22:50:35.182022 coreos-metadata[1479]: Feb 13 22:50:35.181 INFO Fetch successful Feb 13 22:50:35.182210 coreos-metadata[1479]: Feb 13 22:50:35.182 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 22:50:35.192782 coreos-metadata[1479]: Feb 13 22:50:35.192 INFO Fetch successful Feb 13 22:50:35.192927 coreos-metadata[1479]: Feb 13 22:50:35.192 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 22:50:35.209174 coreos-metadata[1479]: Feb 13 22:50:35.209 INFO Fetch successful Feb 13 22:50:35.209294 coreos-metadata[1479]: Feb 13 22:50:35.209 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 22:50:35.227597 coreos-metadata[1479]: Feb 13 22:50:35.227 INFO Fetch successful Feb 13 22:50:35.253193 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 22:50:35.254136 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 22:50:35.606386 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 47046 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:35.608602 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:35.615359 systemd-logind[1489]: New session 4 of user core. Feb 13 22:50:35.624487 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 22:50:36.096060 coreos-metadata[1563]: Feb 13 22:50:36.095 WARN failed to locate config-drive, using the metadata service API instead Feb 13 22:50:36.117295 coreos-metadata[1563]: Feb 13 22:50:36.117 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 22:50:36.144767 coreos-metadata[1563]: Feb 13 22:50:36.144 INFO Fetch successful Feb 13 22:50:36.145131 coreos-metadata[1563]: Feb 13 22:50:36.145 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 22:50:36.177242 coreos-metadata[1563]: Feb 13 22:50:36.177 INFO Fetch successful Feb 13 22:50:36.181749 unknown[1563]: wrote ssh authorized keys file for user: core Feb 13 22:50:36.212776 update-ssh-keys[1670]: Updated "/home/core/.ssh/authorized_keys" Feb 13 22:50:36.213571 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 22:50:36.216466 systemd[1]: Finished sshkeys.service. Feb 13 22:50:36.220159 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 22:50:36.221194 systemd[1]: Startup finished in 1.566s (kernel) + 15.631s (initrd) + 12.887s (userspace) = 30.086s. 
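The coreos-metadata entries above show the fallback path: no OpenStack config-drive is present, so the agent walks the EC2-compatible endpoints of the link-local metadata service instead, one HTTP GET per key. The same fetch with only the standard library (this works only from inside such an instance):

    # Sketch: replay the metadata lookups coreos-metadata performed above.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data/"
    for key in ("hostname", "instance-id", "instance-type",
                "local-ipv4", "public-ipv4", "public-keys"):
        with urlopen(BASE + key, timeout=5) as resp:
            print(f"{key} = {resp.read().decode().strip()}")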
Feb 13 22:50:36.224599 sshd[1666]: Connection closed by 147.75.109.163 port 47046 Feb 13 22:50:36.225106 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Feb 13 22:50:36.229603 systemd[1]: sshd@1-10.230.69.138:22-147.75.109.163:47046.service: Deactivated successfully. Feb 13 22:50:36.232145 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 22:50:36.234325 systemd-logind[1489]: Session 4 logged out. Waiting for processes to exit. Feb 13 22:50:36.235757 systemd-logind[1489]: Removed session 4. Feb 13 22:50:36.387255 systemd[1]: Started sshd@2-10.230.69.138:22-147.75.109.163:47054.service - OpenSSH per-connection server daemon (147.75.109.163:47054). Feb 13 22:50:37.274987 sshd[1677]: Accepted publickey for core from 147.75.109.163 port 47054 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:37.276971 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:37.284932 systemd-logind[1489]: New session 5 of user core. Feb 13 22:50:37.292154 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 22:50:37.894762 sshd[1679]: Connection closed by 147.75.109.163 port 47054 Feb 13 22:50:37.895754 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Feb 13 22:50:37.900510 systemd-logind[1489]: Session 5 logged out. Waiting for processes to exit. Feb 13 22:50:37.901743 systemd[1]: sshd@2-10.230.69.138:22-147.75.109.163:47054.service: Deactivated successfully. Feb 13 22:50:37.904536 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 22:50:37.905978 systemd-logind[1489]: Removed session 5. Feb 13 22:50:41.830610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 22:50:41.837431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:50:42.133734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:50:42.149358 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:50:42.235940 kubelet[1691]: E0213 22:50:42.235811 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:50:42.239480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:50:42.239731 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:50:48.052629 systemd[1]: Started sshd@3-10.230.69.138:22-147.75.109.163:34332.service - OpenSSH per-connection server daemon (147.75.109.163:34332). Feb 13 22:50:48.958613 sshd[1699]: Accepted publickey for core from 147.75.109.163 port 34332 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:48.960639 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:48.967719 systemd-logind[1489]: New session 6 of user core. Feb 13 22:50:48.977151 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 22:50:49.581450 sshd[1701]: Connection closed by 147.75.109.163 port 34332 Feb 13 22:50:49.582882 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Feb 13 22:50:49.589039 systemd[1]: sshd@3-10.230.69.138:22-147.75.109.163:34332.service: Deactivated successfully. Feb 13 22:50:49.591657 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 22:50:49.592760 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Feb 13 22:50:49.594787 systemd-logind[1489]: Removed session 6. Feb 13 22:50:49.747232 systemd[1]: Started sshd@4-10.230.69.138:22-147.75.109.163:50512.service - OpenSSH per-connection server daemon (147.75.109.163:50512). Feb 13 22:50:50.636545 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 50512 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:50.638558 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:50.645434 systemd-logind[1489]: New session 7 of user core. Feb 13 22:50:50.655116 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 22:50:51.249314 sshd[1708]: Connection closed by 147.75.109.163 port 50512 Feb 13 22:50:51.250267 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Feb 13 22:50:51.255029 systemd[1]: sshd@4-10.230.69.138:22-147.75.109.163:50512.service: Deactivated successfully. Feb 13 22:50:51.257052 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 22:50:51.257816 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Feb 13 22:50:51.259206 systemd-logind[1489]: Removed session 7. Feb 13 22:50:51.401954 systemd[1]: Started sshd@5-10.230.69.138:22-147.75.109.163:50524.service - OpenSSH per-connection server daemon (147.75.109.163:50524). Feb 13 22:50:52.295692 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 50524 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:52.297774 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:52.299579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 22:50:52.308185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:50:52.329528 systemd-logind[1489]: New session 8 of user core. Feb 13 22:50:52.338290 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 22:50:52.501971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:50:52.522459 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:50:52.588534 kubelet[1724]: E0213 22:50:52.588144 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:50:52.591608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:50:52.591863 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 22:50:52.917808 sshd[1718]: Connection closed by 147.75.109.163 port 50524 Feb 13 22:50:52.919147 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Feb 13 22:50:52.924093 systemd[1]: sshd@5-10.230.69.138:22-147.75.109.163:50524.service: Deactivated successfully. Feb 13 22:50:52.926811 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 22:50:52.929230 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit. Feb 13 22:50:52.930695 systemd-logind[1489]: Removed session 8. Feb 13 22:50:53.077036 systemd[1]: Started sshd@6-10.230.69.138:22-147.75.109.163:50526.service - OpenSSH per-connection server daemon (147.75.109.163:50526). Feb 13 22:50:53.981551 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 50526 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:53.983511 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:53.991928 systemd-logind[1489]: New session 9 of user core. Feb 13 22:50:53.995079 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 22:50:54.474011 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 22:50:54.474491 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 22:50:54.489969 sudo[1738]: pam_unix(sudo:session): session closed for user root Feb 13 22:50:54.633365 sshd[1737]: Connection closed by 147.75.109.163 port 50526 Feb 13 22:50:54.634569 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Feb 13 22:50:54.639620 systemd[1]: sshd@6-10.230.69.138:22-147.75.109.163:50526.service: Deactivated successfully. Feb 13 22:50:54.641613 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 22:50:54.642614 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. Feb 13 22:50:54.644236 systemd-logind[1489]: Removed session 9. Feb 13 22:50:54.792292 systemd[1]: Started sshd@7-10.230.69.138:22-147.75.109.163:50542.service - OpenSSH per-connection server daemon (147.75.109.163:50542). Feb 13 22:50:55.686663 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 50542 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:55.688713 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:55.694798 systemd-logind[1489]: New session 10 of user core. Feb 13 22:50:55.705092 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 22:50:56.166430 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 22:50:56.166901 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 22:50:56.173378 sudo[1747]: pam_unix(sudo:session): session closed for user root Feb 13 22:50:56.181094 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 22:50:56.181559 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 22:50:56.202503 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 22:50:56.242312 augenrules[1769]: No rules Feb 13 22:50:56.243399 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 22:50:56.243800 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
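The sequence above (a sudo rm of 80-selinux.rules and 99-default.rules followed by a restart of audit-rules.service) empties /etc/audit/rules.d/, so augenrules correctly reports "No rules". A one-liner to confirm the loaded ruleset, assuming the standard audit userspace tools are installed:

    # Sketch: show the kernel's loaded audit rules, matching augenrules' report above.
    import subprocess

    print(subprocess.run(["auditctl", "-l"], capture_output=True, text=True).stdout)
    # expected output on this host: No rules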
Feb 13 22:50:56.245163 sudo[1746]: pam_unix(sudo:session): session closed for user root Feb 13 22:50:56.389442 sshd[1745]: Connection closed by 147.75.109.163 port 50542 Feb 13 22:50:56.390378 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Feb 13 22:50:56.395003 systemd[1]: sshd@7-10.230.69.138:22-147.75.109.163:50542.service: Deactivated successfully. Feb 13 22:50:56.397142 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 22:50:56.397951 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Feb 13 22:50:56.399377 systemd-logind[1489]: Removed session 10. Feb 13 22:50:56.547298 systemd[1]: Started sshd@8-10.230.69.138:22-147.75.109.163:50552.service - OpenSSH per-connection server daemon (147.75.109.163:50552). Feb 13 22:50:57.437410 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 50552 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:50:57.439552 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:50:57.446112 systemd-logind[1489]: New session 11 of user core. Feb 13 22:50:57.454279 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 22:50:57.913001 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 22:50:57.913489 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 22:50:58.652488 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 22:50:58.664536 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 22:50:59.407105 dockerd[1798]: time="2025-02-13T22:50:59.405892785Z" level=info msg="Starting up" Feb 13 22:50:59.538856 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2643492322-merged.mount: Deactivated successfully. Feb 13 22:50:59.551700 systemd[1]: var-lib-docker-metacopy\x2dcheck3452562130-merged.mount: Deactivated successfully. Feb 13 22:50:59.592763 dockerd[1798]: time="2025-02-13T22:50:59.592247972Z" level=info msg="Loading containers: start." Feb 13 22:50:59.868357 kernel: Initializing XFRM netlink socket Feb 13 22:50:59.904575 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Feb 13 22:50:59.984504 systemd-networkd[1439]: docker0: Link UP Feb 13 22:51:00.015665 dockerd[1798]: time="2025-02-13T22:51:00.015591751Z" level=info msg="Loading containers: done." Feb 13 22:51:00.045749 dockerd[1798]: time="2025-02-13T22:51:00.045069834Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 22:51:00.045749 dockerd[1798]: time="2025-02-13T22:51:00.045230043Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 22:51:00.045749 dockerd[1798]: time="2025-02-13T22:51:00.045410793Z" level=info msg="Daemon has completed initialization" Feb 13 22:51:00.083773 dockerd[1798]: time="2025-02-13T22:51:00.083594647Z" level=info msg="API listen on /run/docker.sock" Feb 13 22:51:00.084407 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 22:51:00.207586 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
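Once dockerd reports "API listen on /run/docker.sock" above, the Engine API is live on that unix socket. The docker CLI is the usual client, but a raw probe needs nothing beyond the standard library:

    # Sketch: ping the Docker Engine API over the unix socket announced above.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    print(s.recv(4096).decode())  # expect a 200 response with body "OK"
    s.close()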
Feb 13 22:51:00.969391 systemd-resolved[1383]: Clock change detected. Flushing caches. Feb 13 22:51:00.969755 systemd-timesyncd[1404]: Contacted time server [2a03:b0c0:1:d0::b1d:6001]:123 (2.flatcar.pool.ntp.org). Feb 13 22:51:00.969871 systemd-timesyncd[1404]: Initial clock synchronization to Thu 2025-02-13 22:51:00.969174 UTC. Feb 13 22:51:01.929230 containerd[1513]: time="2025-02-13T22:51:01.927697311Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 22:51:02.805687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230039698.mount: Deactivated successfully. Feb 13 22:51:03.423698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 22:51:03.433110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:51:03.998876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:51:04.007735 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:51:04.356680 kubelet[2053]: E0213 22:51:04.354405 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:51:04.358921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:51:04.359518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:51:05.288640 containerd[1513]: time="2025-02-13T22:51:05.287268649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:05.289287 containerd[1513]: time="2025-02-13T22:51:05.289208126Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673939" Feb 13 22:51:05.290582 containerd[1513]: time="2025-02-13T22:51:05.290510568Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:05.294562 containerd[1513]: time="2025-02-13T22:51:05.294471055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:05.297279 containerd[1513]: time="2025-02-13T22:51:05.296374245Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 3.368480546s" Feb 13 22:51:05.297279 containerd[1513]: time="2025-02-13T22:51:05.296476953Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 22:51:05.298654 containerd[1513]: time="2025-02-13T22:51:05.298624225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 22:51:06.027102 systemd[1]: Started sshd@9-10.230.69.138:22-27.254.163.61:33382.service - 
OpenSSH per-connection server daemon (27.254.163.61:33382). Feb 13 22:51:07.115980 sshd[2061]: Invalid user xuhao from 27.254.163.61 port 33382 Feb 13 22:51:07.319421 sshd[2061]: Received disconnect from 27.254.163.61 port 33382:11: Bye Bye [preauth] Feb 13 22:51:07.319421 sshd[2061]: Disconnected from invalid user xuhao 27.254.163.61 port 33382 [preauth] Feb 13 22:51:07.319177 systemd[1]: sshd@9-10.230.69.138:22-27.254.163.61:33382.service: Deactivated successfully. Feb 13 22:51:08.196567 containerd[1513]: time="2025-02-13T22:51:08.196476842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:08.198242 containerd[1513]: time="2025-02-13T22:51:08.198146208Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771792" Feb 13 22:51:08.199069 containerd[1513]: time="2025-02-13T22:51:08.198996947Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:08.203619 containerd[1513]: time="2025-02-13T22:51:08.203484362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:08.205108 containerd[1513]: time="2025-02-13T22:51:08.205061994Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 2.906385925s" Feb 13 22:51:08.205196 containerd[1513]: time="2025-02-13T22:51:08.205109885Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 22:51:08.206447 containerd[1513]: time="2025-02-13T22:51:08.206353314Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 22:51:10.190586 containerd[1513]: time="2025-02-13T22:51:10.190355640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:10.196181 containerd[1513]: time="2025-02-13T22:51:10.195940403Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170284" Feb 13 22:51:10.198884 containerd[1513]: time="2025-02-13T22:51:10.198809120Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:10.203465 containerd[1513]: time="2025-02-13T22:51:10.203425838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:10.204930 containerd[1513]: time="2025-02-13T22:51:10.204644275Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.998250652s" Feb 13 22:51:10.204930 containerd[1513]: time="2025-02-13T22:51:10.204701335Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 22:51:10.207049 containerd[1513]: time="2025-02-13T22:51:10.206666170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 22:51:11.905500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744463907.mount: Deactivated successfully. Feb 13 22:51:12.821489 containerd[1513]: time="2025-02-13T22:51:12.820352388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:12.821489 containerd[1513]: time="2025-02-13T22:51:12.821432297Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908847" Feb 13 22:51:12.822305 containerd[1513]: time="2025-02-13T22:51:12.822261501Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:12.824797 containerd[1513]: time="2025-02-13T22:51:12.824744035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:12.826035 containerd[1513]: time="2025-02-13T22:51:12.825998996Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.61879552s" Feb 13 22:51:12.826187 containerd[1513]: time="2025-02-13T22:51:12.826158952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 22:51:12.828022 containerd[1513]: time="2025-02-13T22:51:12.827979042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 22:51:13.511040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928757454.mount: Deactivated successfully. Feb 13 22:51:13.878752 update_engine[1490]: I20250213 22:51:13.876973 1490 update_attempter.cc:509] Updating boot flags... Feb 13 22:51:14.009102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2100) Feb 13 22:51:14.227633 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2097) Feb 13 22:51:14.423744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 22:51:14.432961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:51:14.634929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 22:51:14.649586 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:51:14.779796 kubelet[2147]: E0213 22:51:14.779287 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:51:14.783310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:51:14.783558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:51:15.645869 containerd[1513]: time="2025-02-13T22:51:15.645777186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:15.648074 containerd[1513]: time="2025-02-13T22:51:15.647475629Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Feb 13 22:51:15.651624 containerd[1513]: time="2025-02-13T22:51:15.649970009Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:15.656707 containerd[1513]: time="2025-02-13T22:51:15.655359735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:15.657278 containerd[1513]: time="2025-02-13T22:51:15.657239891Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.82920495s" Feb 13 22:51:15.657414 containerd[1513]: time="2025-02-13T22:51:15.657386623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 22:51:15.658757 containerd[1513]: time="2025-02-13T22:51:15.658711890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 22:51:16.192919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083158320.mount: Deactivated successfully. 
Feb 13 22:51:16.201149 containerd[1513]: time="2025-02-13T22:51:16.201068053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:16.202359 containerd[1513]: time="2025-02-13T22:51:16.202302915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Feb 13 22:51:16.204613 containerd[1513]: time="2025-02-13T22:51:16.203613279Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:16.206643 containerd[1513]: time="2025-02-13T22:51:16.206587375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:16.208076 containerd[1513]: time="2025-02-13T22:51:16.208040750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 549.282055ms" Feb 13 22:51:16.208238 containerd[1513]: time="2025-02-13T22:51:16.208210527Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 22:51:16.209579 containerd[1513]: time="2025-02-13T22:51:16.209539563Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 22:51:16.886476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546341402.mount: Deactivated successfully. Feb 13 22:51:19.919799 containerd[1513]: time="2025-02-13T22:51:19.919722479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:19.922484 containerd[1513]: time="2025-02-13T22:51:19.922401744Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328" Feb 13 22:51:19.923768 containerd[1513]: time="2025-02-13T22:51:19.923703473Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:19.929621 containerd[1513]: time="2025-02-13T22:51:19.928799406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:19.932841 containerd[1513]: time="2025-02-13T22:51:19.932782892Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.723194988s" Feb 13 22:51:19.932927 containerd[1513]: time="2025-02-13T22:51:19.932857665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 22:51:24.215574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 22:51:24.229995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:51:24.270865 systemd[1]: Reloading requested from client PID 2247 ('systemctl') (unit session-11.scope)... Feb 13 22:51:24.270946 systemd[1]: Reloading... Feb 13 22:51:24.483681 zram_generator::config[2283]: No configuration found. Feb 13 22:51:24.671542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:51:24.781833 systemd[1]: Reloading finished in 510 ms. Feb 13 22:51:24.863905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:51:24.870410 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:51:24.873885 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 22:51:24.874528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:51:24.881556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:51:25.033171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:51:25.048551 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 22:51:25.148103 kubelet[2354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 22:51:25.148103 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 22:51:25.148103 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 22:51:25.148753 kubelet[2354]: I0213 22:51:25.148254 2354 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 22:51:25.607636 kubelet[2354]: I0213 22:51:25.607556 2354 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 22:51:25.607956 kubelet[2354]: I0213 22:51:25.607924 2354 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 22:51:25.608966 kubelet[2354]: I0213 22:51:25.608942 2354 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 22:51:25.660814 kubelet[2354]: E0213 22:51:25.660748 2354 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.69.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:25.661278 kubelet[2354]: I0213 22:51:25.661252 2354 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 22:51:25.683275 kubelet[2354]: E0213 22:51:25.683217 2354 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 22:51:25.683569 kubelet[2354]: I0213 22:51:25.683547 2354 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 22:51:25.691895 kubelet[2354]: I0213 22:51:25.691846 2354 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 22:51:25.695994 kubelet[2354]: I0213 22:51:25.695935 2354 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 22:51:25.696388 kubelet[2354]: I0213 22:51:25.696096 2354 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-uk33m.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 22:51:25.697495 kubelet[2354]: I0213 22:51:25.696812 2354 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 22:51:25.697495 kubelet[2354]: I0213 22:51:25.696838 2354 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 22:51:25.697495 kubelet[2354]: I0213 22:51:25.697101 2354 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:51:25.701179 kubelet[2354]: I0213 22:51:25.701155 2354 kubelet.go:446] "Attempting to sync node with API server" Feb 13 22:51:25.701336 kubelet[2354]: I0213 22:51:25.701315 2354 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 22:51:25.701501 kubelet[2354]: I0213 22:51:25.701482 2354 kubelet.go:352] "Adding apiserver pod source" Feb 13 22:51:25.701654 kubelet[2354]: I0213 22:51:25.701634 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 22:51:25.706349 kubelet[2354]: W0213 22:51:25.706273 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.69.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uk33m.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused Feb 13 22:51:25.706433 kubelet[2354]: E0213 22:51:25.706363 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.69.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uk33m.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 
22:51:25.707301 kubelet[2354]: W0213 22:51:25.707257 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.69.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused Feb 13 22:51:25.707378 kubelet[2354]: E0213 22:51:25.707310 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.69.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:25.708621 kubelet[2354]: I0213 22:51:25.707447 2354 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 22:51:25.711299 kubelet[2354]: I0213 22:51:25.711216 2354 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 22:51:25.712172 kubelet[2354]: W0213 22:51:25.712120 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 22:51:25.724758 kubelet[2354]: I0213 22:51:25.722783 2354 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 22:51:25.724758 kubelet[2354]: I0213 22:51:25.722858 2354 server.go:1287] "Started kubelet" Feb 13 22:51:25.730472 kubelet[2354]: I0213 22:51:25.730392 2354 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 22:51:25.731959 kubelet[2354]: I0213 22:51:25.731931 2354 server.go:490] "Adding debug handlers to kubelet server" Feb 13 22:51:25.733417 kubelet[2354]: I0213 22:51:25.733335 2354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 22:51:25.733871 kubelet[2354]: I0213 22:51:25.733835 2354 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 22:51:25.737760 kubelet[2354]: I0213 22:51:25.737453 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 22:51:25.737760 kubelet[2354]: E0213 22:51:25.735148 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.69.138:6443/api/v1/namespaces/default/events\": dial tcp 10.230.69.138:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-uk33m.gb1.brightbox.com.1823e649cbe37154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-uk33m.gb1.brightbox.com,UID:srv-uk33m.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-uk33m.gb1.brightbox.com,},FirstTimestamp:2025-02-13 22:51:25.722825044 +0000 UTC m=+0.668494131,LastTimestamp:2025-02-13 22:51:25.722825044 +0000 UTC m=+0.668494131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-uk33m.gb1.brightbox.com,}" Feb 13 22:51:25.738702 kubelet[2354]: I0213 22:51:25.738281 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 22:51:25.744560 kubelet[2354]: E0213 22:51:25.743818 2354 kubelet_node_status.go:467] "Error getting the current node from lister" 
err="node \"srv-uk33m.gb1.brightbox.com\" not found" Feb 13 22:51:25.744560 kubelet[2354]: I0213 22:51:25.743929 2354 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 22:51:25.748287 kubelet[2354]: I0213 22:51:25.747245 2354 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 22:51:25.748287 kubelet[2354]: I0213 22:51:25.747394 2354 reconciler.go:26] "Reconciler: start to sync state" Feb 13 22:51:25.748503 kubelet[2354]: W0213 22:51:25.748427 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.69.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused Feb 13 22:51:25.748612 kubelet[2354]: E0213 22:51:25.748506 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.69.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:25.750637 kubelet[2354]: E0213 22:51:25.749485 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uk33m.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.138:6443: connect: connection refused" interval="200ms" Feb 13 22:51:25.755080 kubelet[2354]: I0213 22:51:25.755041 2354 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 22:51:25.758651 kubelet[2354]: E0213 22:51:25.758563 2354 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 22:51:25.759958 kubelet[2354]: I0213 22:51:25.759913 2354 factory.go:221] Registration of the containerd container factory successfully Feb 13 22:51:25.759958 kubelet[2354]: I0213 22:51:25.759940 2354 factory.go:221] Registration of the systemd container factory successfully Feb 13 22:51:25.792848 kubelet[2354]: I0213 22:51:25.792780 2354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 22:51:25.797884 kubelet[2354]: I0213 22:51:25.796483 2354 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 22:51:25.797884 kubelet[2354]: I0213 22:51:25.796517 2354 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 22:51:25.797884 kubelet[2354]: I0213 22:51:25.796544 2354 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:51:25.797884 kubelet[2354]: I0213 22:51:25.797196 2354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 22:51:25.797884 kubelet[2354]: I0213 22:51:25.797238 2354 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 22:51:25.797884 kubelet[2354]: I0213 22:51:25.797273 2354 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 22:51:25.797884 kubelet[2354]: I0213 22:51:25.797295 2354 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 22:51:25.797884 kubelet[2354]: E0213 22:51:25.797494 2354 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 22:51:25.799867 kubelet[2354]: W0213 22:51:25.799820 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.69.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused Feb 13 22:51:25.799997 kubelet[2354]: E0213 22:51:25.799886 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.69.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:25.801027 kubelet[2354]: I0213 22:51:25.800333 2354 policy_none.go:49] "None policy: Start" Feb 13 22:51:25.801027 kubelet[2354]: I0213 22:51:25.800370 2354 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 22:51:25.801027 kubelet[2354]: I0213 22:51:25.800399 2354 state_mem.go:35] "Initializing new in-memory state store" Feb 13 22:51:25.811484 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 22:51:25.824807 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 22:51:25.828843 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 22:51:25.844395 kubelet[2354]: E0213 22:51:25.844293 2354 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-uk33m.gb1.brightbox.com\" not found" Feb 13 22:51:25.845492 kubelet[2354]: I0213 22:51:25.845436 2354 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 22:51:25.845775 kubelet[2354]: I0213 22:51:25.845752 2354 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 22:51:25.845983 kubelet[2354]: I0213 22:51:25.845790 2354 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 22:51:25.846535 kubelet[2354]: I0213 22:51:25.846497 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 22:51:25.852929 kubelet[2354]: E0213 22:51:25.852887 2354 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 22:51:25.853028 kubelet[2354]: E0213 22:51:25.852974 2354 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-uk33m.gb1.brightbox.com\" not found" Feb 13 22:51:25.914940 systemd[1]: Created slice kubepods-burstable-podfd6de2db82020755d6288293a366c63d.slice - libcontainer container kubepods-burstable-podfd6de2db82020755d6288293a366c63d.slice. 
Feb 13 22:51:25.932136 kubelet[2354]: E0213 22:51:25.932073 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:25.941903 systemd[1]: Created slice kubepods-burstable-pod994991e7937382829015471f05a1c919.slice - libcontainer container kubepods-burstable-pod994991e7937382829015471f05a1c919.slice. Feb 13 22:51:25.955195 systemd[1]: Created slice kubepods-burstable-podf4cdd00cbf6905ad0778c6d8c2e03c4d.slice - libcontainer container kubepods-burstable-podf4cdd00cbf6905ad0778c6d8c2e03c4d.slice. Feb 13 22:51:25.975945 kubelet[2354]: E0213 22:51:25.944818 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:25.975945 kubelet[2354]: I0213 22:51:25.949474 2354 kubelet_node_status.go:76] "Attempting to register node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:25.975945 kubelet[2354]: E0213 22:51:25.950119 2354 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.69.138:6443/api/v1/nodes\": dial tcp 10.230.69.138:6443: connect: connection refused" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:25.975945 kubelet[2354]: E0213 22:51:25.950130 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uk33m.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.138:6443: connect: connection refused" interval="400ms" Feb 13 22:51:25.975945 kubelet[2354]: E0213 22:51:25.957860 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049244 kubelet[2354]: I0213 22:51:26.049063 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd6de2db82020755d6288293a366c63d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-uk33m.gb1.brightbox.com\" (UID: \"fd6de2db82020755d6288293a366c63d\") " pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049507 kubelet[2354]: I0213 22:51:26.049277 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-flexvolume-dir\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049507 kubelet[2354]: I0213 22:51:26.049368 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-k8s-certs\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049507 kubelet[2354]: I0213 22:51:26.049461 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-kubeconfig\") pod 
\"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049733 kubelet[2354]: I0213 22:51:26.049537 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049733 kubelet[2354]: I0213 22:51:26.049643 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd6de2db82020755d6288293a366c63d-ca-certs\") pod \"kube-apiserver-srv-uk33m.gb1.brightbox.com\" (UID: \"fd6de2db82020755d6288293a366c63d\") " pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049733 kubelet[2354]: I0213 22:51:26.049723 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd6de2db82020755d6288293a366c63d-k8s-certs\") pod \"kube-apiserver-srv-uk33m.gb1.brightbox.com\" (UID: \"fd6de2db82020755d6288293a366c63d\") " pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049913 kubelet[2354]: I0213 22:51:26.049755 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-ca-certs\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.049913 kubelet[2354]: I0213 22:51:26.049805 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4cdd00cbf6905ad0778c6d8c2e03c4d-kubeconfig\") pod \"kube-scheduler-srv-uk33m.gb1.brightbox.com\" (UID: \"f4cdd00cbf6905ad0778c6d8c2e03c4d\") " pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.153683 kubelet[2354]: I0213 22:51:26.153641 2354 kubelet_node_status.go:76] "Attempting to register node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.154501 kubelet[2354]: E0213 22:51:26.154462 2354 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.69.138:6443/api/v1/nodes\": dial tcp 10.230.69.138:6443: connect: connection refused" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.239769 containerd[1513]: time="2025-02-13T22:51:26.237356613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-uk33m.gb1.brightbox.com,Uid:fd6de2db82020755d6288293a366c63d,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:26.246590 containerd[1513]: time="2025-02-13T22:51:26.246037878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-uk33m.gb1.brightbox.com,Uid:994991e7937382829015471f05a1c919,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:26.259670 containerd[1513]: time="2025-02-13T22:51:26.259560969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-uk33m.gb1.brightbox.com,Uid:f4cdd00cbf6905ad0778c6d8c2e03c4d,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:26.351051 
kubelet[2354]: E0213 22:51:26.350983 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uk33m.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.138:6443: connect: connection refused" interval="800ms" Feb 13 22:51:26.557314 kubelet[2354]: I0213 22:51:26.557158 2354 kubelet_node_status.go:76] "Attempting to register node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.557835 kubelet[2354]: E0213 22:51:26.557566 2354 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.69.138:6443/api/v1/nodes\": dial tcp 10.230.69.138:6443: connect: connection refused" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:26.713109 kubelet[2354]: W0213 22:51:26.712978 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.69.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused Feb 13 22:51:26.713109 kubelet[2354]: E0213 22:51:26.713088 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.69.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:26.780703 kubelet[2354]: W0213 22:51:26.780620 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.69.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused Feb 13 22:51:26.780949 kubelet[2354]: E0213 22:51:26.780722 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.69.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:26.841901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172290086.mount: Deactivated successfully. 
Feb 13 22:51:26.849389 containerd[1513]: time="2025-02-13T22:51:26.848858940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:51:26.850471 containerd[1513]: time="2025-02-13T22:51:26.850416401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 22:51:26.852610 containerd[1513]: time="2025-02-13T22:51:26.852536068Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:51:26.854153 containerd[1513]: time="2025-02-13T22:51:26.854102237Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:51:26.855374 containerd[1513]: time="2025-02-13T22:51:26.855298174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 22:51:26.857621 containerd[1513]: time="2025-02-13T22:51:26.856932291Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:51:26.858132 containerd[1513]: time="2025-02-13T22:51:26.858099403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:51:26.858817 containerd[1513]: time="2025-02-13T22:51:26.858771528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 22:51:26.860813 containerd[1513]: time="2025-02-13T22:51:26.860772413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 623.20361ms" Feb 13 22:51:26.866332 containerd[1513]: time="2025-02-13T22:51:26.866297120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 620.144005ms" Feb 13 22:51:26.867679 containerd[1513]: time="2025-02-13T22:51:26.867645693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 607.892082ms" Feb 13 22:51:26.895171 kubelet[2354]: W0213 22:51:26.894984 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.69.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uk33m.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused 
Feb 13 22:51:26.895171 kubelet[2354]: E0213 22:51:26.895098 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.69.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uk33m.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:27.046245 containerd[1513]: time="2025-02-13T22:51:27.045855564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:51:27.046245 containerd[1513]: time="2025-02-13T22:51:27.045938471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:51:27.046245 containerd[1513]: time="2025-02-13T22:51:27.045966709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:27.046245 containerd[1513]: time="2025-02-13T22:51:27.046079442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:27.057534 containerd[1513]: time="2025-02-13T22:51:27.044237440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:51:27.057534 containerd[1513]: time="2025-02-13T22:51:27.055720996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:51:27.057534 containerd[1513]: time="2025-02-13T22:51:27.055746973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:27.057534 containerd[1513]: time="2025-02-13T22:51:27.055987149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:27.062114 containerd[1513]: time="2025-02-13T22:51:27.060803665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:51:27.062655 containerd[1513]: time="2025-02-13T22:51:27.062260214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:51:27.062655 containerd[1513]: time="2025-02-13T22:51:27.062290191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:27.062655 containerd[1513]: time="2025-02-13T22:51:27.062460379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:27.105714 systemd[1]: Started cri-containerd-2446a74895224517a96dd5142954542f4c09344dd2f08bbf94ac2a5c5dea0a08.scope - libcontainer container 2446a74895224517a96dd5142954542f4c09344dd2f08bbf94ac2a5c5dea0a08. Feb 13 22:51:27.117403 systemd[1]: Started cri-containerd-4aac88ff95590ad3a6f8e9459b8e9b44f4399e6c8f2f5f9dd1dd42d1250a1e74.scope - libcontainer container 4aac88ff95590ad3a6f8e9459b8e9b44f4399e6c8f2f5f9dd1dd42d1250a1e74. 
Feb 13 22:51:27.123679 systemd[1]: Started cri-containerd-3de057d6fd4432500f82acc97327423216fe2b0cfcfe706136a2462f2ccbf6cd.scope - libcontainer container 3de057d6fd4432500f82acc97327423216fe2b0cfcfe706136a2462f2ccbf6cd. Feb 13 22:51:27.153195 kubelet[2354]: E0213 22:51:27.153135 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uk33m.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.138:6443: connect: connection refused" interval="1.6s" Feb 13 22:51:27.229810 containerd[1513]: time="2025-02-13T22:51:27.229733345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-uk33m.gb1.brightbox.com,Uid:994991e7937382829015471f05a1c919,Namespace:kube-system,Attempt:0,} returns sandbox id \"2446a74895224517a96dd5142954542f4c09344dd2f08bbf94ac2a5c5dea0a08\"" Feb 13 22:51:27.237552 containerd[1513]: time="2025-02-13T22:51:27.237512942Z" level=info msg="CreateContainer within sandbox \"2446a74895224517a96dd5142954542f4c09344dd2f08bbf94ac2a5c5dea0a08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 22:51:27.239440 containerd[1513]: time="2025-02-13T22:51:27.239408266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-uk33m.gb1.brightbox.com,Uid:fd6de2db82020755d6288293a366c63d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3de057d6fd4432500f82acc97327423216fe2b0cfcfe706136a2462f2ccbf6cd\"" Feb 13 22:51:27.245964 containerd[1513]: time="2025-02-13T22:51:27.245805454Z" level=info msg="CreateContainer within sandbox \"3de057d6fd4432500f82acc97327423216fe2b0cfcfe706136a2462f2ccbf6cd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 22:51:27.269731 containerd[1513]: time="2025-02-13T22:51:27.268767728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-uk33m.gb1.brightbox.com,Uid:f4cdd00cbf6905ad0778c6d8c2e03c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4aac88ff95590ad3a6f8e9459b8e9b44f4399e6c8f2f5f9dd1dd42d1250a1e74\"" Feb 13 22:51:27.275229 containerd[1513]: time="2025-02-13T22:51:27.275049216Z" level=info msg="CreateContainer within sandbox \"4aac88ff95590ad3a6f8e9459b8e9b44f4399e6c8f2f5f9dd1dd42d1250a1e74\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 22:51:27.284050 containerd[1513]: time="2025-02-13T22:51:27.283968772Z" level=info msg="CreateContainer within sandbox \"2446a74895224517a96dd5142954542f4c09344dd2f08bbf94ac2a5c5dea0a08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9de135b5718bc37c3ae29310c69abdc9df1ba0c1b98778c55e116281c09dff0\"" Feb 13 22:51:27.285513 containerd[1513]: time="2025-02-13T22:51:27.285482878Z" level=info msg="StartContainer for \"f9de135b5718bc37c3ae29310c69abdc9df1ba0c1b98778c55e116281c09dff0\"" Feb 13 22:51:27.292534 containerd[1513]: time="2025-02-13T22:51:27.292325530Z" level=info msg="CreateContainer within sandbox \"3de057d6fd4432500f82acc97327423216fe2b0cfcfe706136a2462f2ccbf6cd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"baa31f1ea4dac70fe6b3524f8a652136ba062f2f0fd53cd52fd7be92489b82e8\"" Feb 13 22:51:27.293695 containerd[1513]: time="2025-02-13T22:51:27.293421296Z" level=info msg="StartContainer for \"baa31f1ea4dac70fe6b3524f8a652136ba062f2f0fd53cd52fd7be92489b82e8\"" Feb 13 22:51:27.295704 containerd[1513]: time="2025-02-13T22:51:27.295590959Z" level=info 
msg="CreateContainer within sandbox \"4aac88ff95590ad3a6f8e9459b8e9b44f4399e6c8f2f5f9dd1dd42d1250a1e74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1ab8926f0aa212c1b2fa2ae83627661179378a5b623e77bafaefe0d915fbd381\"" Feb 13 22:51:27.296266 containerd[1513]: time="2025-02-13T22:51:27.296167217Z" level=info msg="StartContainer for \"1ab8926f0aa212c1b2fa2ae83627661179378a5b623e77bafaefe0d915fbd381\"" Feb 13 22:51:27.309684 kubelet[2354]: W0213 22:51:27.309472 2354 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.69.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.69.138:6443: connect: connection refused Feb 13 22:51:27.309684 kubelet[2354]: E0213 22:51:27.309558 2354 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.69.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:27.338805 systemd[1]: Started cri-containerd-f9de135b5718bc37c3ae29310c69abdc9df1ba0c1b98778c55e116281c09dff0.scope - libcontainer container f9de135b5718bc37c3ae29310c69abdc9df1ba0c1b98778c55e116281c09dff0. Feb 13 22:51:27.363786 kubelet[2354]: I0213 22:51:27.363186 2354 kubelet_node_status.go:76] "Attempting to register node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:27.363786 kubelet[2354]: E0213 22:51:27.363660 2354 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.69.138:6443/api/v1/nodes\": dial tcp 10.230.69.138:6443: connect: connection refused" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:27.365836 systemd[1]: Started cri-containerd-1ab8926f0aa212c1b2fa2ae83627661179378a5b623e77bafaefe0d915fbd381.scope - libcontainer container 1ab8926f0aa212c1b2fa2ae83627661179378a5b623e77bafaefe0d915fbd381. Feb 13 22:51:27.373774 systemd[1]: Started cri-containerd-baa31f1ea4dac70fe6b3524f8a652136ba062f2f0fd53cd52fd7be92489b82e8.scope - libcontainer container baa31f1ea4dac70fe6b3524f8a652136ba062f2f0fd53cd52fd7be92489b82e8. 
Feb 13 22:51:27.471149 containerd[1513]: time="2025-02-13T22:51:27.470724377Z" level=info msg="StartContainer for \"f9de135b5718bc37c3ae29310c69abdc9df1ba0c1b98778c55e116281c09dff0\" returns successfully" Feb 13 22:51:27.492175 containerd[1513]: time="2025-02-13T22:51:27.492105459Z" level=info msg="StartContainer for \"1ab8926f0aa212c1b2fa2ae83627661179378a5b623e77bafaefe0d915fbd381\" returns successfully" Feb 13 22:51:27.506964 containerd[1513]: time="2025-02-13T22:51:27.506762563Z" level=info msg="StartContainer for \"baa31f1ea4dac70fe6b3524f8a652136ba062f2f0fd53cd52fd7be92489b82e8\" returns successfully" Feb 13 22:51:27.712887 kubelet[2354]: E0213 22:51:27.712831 2354 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.69.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.69.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:51:27.819458 kubelet[2354]: E0213 22:51:27.819093 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:27.829792 kubelet[2354]: E0213 22:51:27.829663 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:27.833632 kubelet[2354]: E0213 22:51:27.833040 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:28.833742 kubelet[2354]: E0213 22:51:28.831615 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:28.833742 kubelet[2354]: E0213 22:51:28.832140 2354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:28.967376 kubelet[2354]: I0213 22:51:28.966943 2354 kubelet_node_status.go:76] "Attempting to register node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:30.544271 kubelet[2354]: E0213 22:51:30.544080 2354 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-uk33m.gb1.brightbox.com\" not found" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:30.671622 kubelet[2354]: I0213 22:51:30.670356 2354 kubelet_node_status.go:79] "Successfully registered node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:30.671622 kubelet[2354]: E0213 22:51:30.670439 2354 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"srv-uk33m.gb1.brightbox.com\": node \"srv-uk33m.gb1.brightbox.com\" not found" Feb 13 22:51:30.708927 kubelet[2354]: I0213 22:51:30.708868 2354 apiserver.go:52] "Watching apiserver" Feb 13 22:51:30.748108 kubelet[2354]: I0213 22:51:30.748030 2354 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 22:51:30.749094 kubelet[2354]: I0213 22:51:30.749066 2354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" 
Feb 13 22:51:30.761524 kubelet[2354]: E0213 22:51:30.761470 2354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-uk33m.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:30.761709 kubelet[2354]: I0213 22:51:30.761532 2354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:30.763554 kubelet[2354]: E0213 22:51:30.763333 2354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:30.763554 kubelet[2354]: I0213 22:51:30.763364 2354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:30.765194 kubelet[2354]: E0213 22:51:30.765169 2354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-uk33m.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:32.584761 systemd[1]: Reloading requested from client PID 2638 ('systemctl') (unit session-11.scope)... Feb 13 22:51:32.584801 systemd[1]: Reloading... Feb 13 22:51:32.717810 zram_generator::config[2674]: No configuration found. Feb 13 22:51:32.921574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:51:33.055451 systemd[1]: Reloading finished in 469 ms. Feb 13 22:51:33.135237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:51:33.149153 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 22:51:33.150318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:51:33.150483 systemd[1]: kubelet.service: Consumed 1.097s CPU time, 124.5M memory peak, 0B memory swap peak. Feb 13 22:51:33.163727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:51:33.442865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:51:33.455274 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 22:51:33.575451 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 22:51:33.575451 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 22:51:33.575451 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
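All three "Failed creating a mirror pod" errors share one cause: the system-node-critical PriorityClass does not exist yet because the apiserver's bootstrap controller has not finished seeding its defaults. The class it eventually creates is equivalent to the following client-go sketch; creating it by hand is for illustration only, and 2000001000 is the reserved value for node-critical pods:

package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Normally the apiserver creates this class itself shortly after startup,
	// which is why the mirror-pod errors above clear on the next kubelet run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "system-node-critical"},
		Value:         2000001000, // highest built-in priority, reserved for node-critical pods
		GlobalDefault: false,
		Description:   "Used for system critical pods that must not be moved from their current node.",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.Background(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
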
Feb 13 22:51:33.575451 kubelet[2741]: I0213 22:51:33.575375 2741 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 22:51:33.593765 kubelet[2741]: I0213 22:51:33.589982 2741 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 22:51:33.593765 kubelet[2741]: I0213 22:51:33.590019 2741 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 22:51:33.593765 kubelet[2741]: I0213 22:51:33.590369 2741 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 22:51:33.593765 kubelet[2741]: I0213 22:51:33.592075 2741 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 22:51:33.600631 kubelet[2741]: I0213 22:51:33.600576 2741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 22:51:33.601689 sudo[2752]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 22:51:33.602939 sudo[2752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 22:51:33.610530 kubelet[2741]: E0213 22:51:33.610437 2741 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 22:51:33.610530 kubelet[2741]: I0213 22:51:33.610527 2741 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 22:51:33.622519 kubelet[2741]: I0213 22:51:33.618897 2741 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 22:51:33.622519 kubelet[2741]: I0213 22:51:33.620351 2741 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 22:51:33.622519 kubelet[2741]: I0213 22:51:33.620387 2741 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-uk33m.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 22:51:33.622519 kubelet[2741]: I0213 22:51:33.620631 2741 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 22:51:33.622995 kubelet[2741]: I0213 22:51:33.620651 2741 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 22:51:33.622995 kubelet[2741]: I0213 22:51:33.620716 2741 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:51:33.622995 kubelet[2741]: I0213 22:51:33.621021 2741 kubelet.go:446] "Attempting to sync node with API server" Feb 13 22:51:33.622995 kubelet[2741]: I0213 22:51:33.621043 2741 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 22:51:33.622995 kubelet[2741]: I0213 22:51:33.621084 2741 kubelet.go:352] "Adding apiserver pod source" Feb 13 22:51:33.622995 kubelet[2741]: I0213 22:51:33.621101 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 22:51:33.635431 kubelet[2741]: I0213 22:51:33.635279 2741 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 22:51:33.636876 kubelet[2741]: I0213 22:51:33.636852 2741 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 22:51:33.637732 kubelet[2741]: I0213 22:51:33.637711 2741 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 22:51:33.637881 kubelet[2741]: I0213 22:51:33.637860 2741 server.go:1287] "Started kubelet" Feb 13 22:51:33.644962 kubelet[2741]: I0213 22:51:33.644650 2741 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 22:51:33.647153 kubelet[2741]: I0213 22:51:33.646035 2741 
server.go:490] "Adding debug handlers to kubelet server" Feb 13 22:51:33.647396 kubelet[2741]: I0213 22:51:33.647342 2741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 22:51:33.647701 kubelet[2741]: I0213 22:51:33.647678 2741 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 22:51:33.649184 kubelet[2741]: I0213 22:51:33.648958 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 22:51:33.662907 kubelet[2741]: I0213 22:51:33.659384 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 22:51:33.662907 kubelet[2741]: I0213 22:51:33.662713 2741 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 22:51:33.663155 kubelet[2741]: E0213 22:51:33.662968 2741 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-uk33m.gb1.brightbox.com\" not found" Feb 13 22:51:33.667313 kubelet[2741]: I0213 22:51:33.666098 2741 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 22:51:33.667313 kubelet[2741]: I0213 22:51:33.666451 2741 reconciler.go:26] "Reconciler: start to sync state" Feb 13 22:51:33.669688 kubelet[2741]: I0213 22:51:33.669548 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 22:51:33.671139 kubelet[2741]: I0213 22:51:33.670522 2741 factory.go:221] Registration of the systemd container factory successfully Feb 13 22:51:33.672767 kubelet[2741]: I0213 22:51:33.672136 2741 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 22:51:33.677267 kubelet[2741]: I0213 22:51:33.671173 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 22:51:33.677267 kubelet[2741]: I0213 22:51:33.673336 2741 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 22:51:33.677267 kubelet[2741]: I0213 22:51:33.673365 2741 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 22:51:33.677267 kubelet[2741]: I0213 22:51:33.673377 2741 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 22:51:33.677267 kubelet[2741]: E0213 22:51:33.673439 2741 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 22:51:33.708319 kubelet[2741]: I0213 22:51:33.708035 2741 factory.go:221] Registration of the containerd container factory successfully Feb 13 22:51:33.713286 kubelet[2741]: E0213 22:51:33.711990 2741 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 22:51:33.777993 kubelet[2741]: E0213 22:51:33.777427 2741 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 22:51:33.837118 kubelet[2741]: I0213 22:51:33.837075 2741 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 22:51:33.837118 kubelet[2741]: I0213 22:51:33.837106 2741 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 22:51:33.837118 kubelet[2741]: I0213 22:51:33.837133 2741 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:51:33.837555 kubelet[2741]: I0213 22:51:33.837363 2741 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 22:51:33.837555 kubelet[2741]: I0213 22:51:33.837391 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 22:51:33.837555 kubelet[2741]: I0213 22:51:33.837422 2741 policy_none.go:49] "None policy: Start" Feb 13 22:51:33.837555 kubelet[2741]: I0213 22:51:33.837447 2741 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 22:51:33.837555 kubelet[2741]: I0213 22:51:33.837477 2741 state_mem.go:35] "Initializing new in-memory state store" Feb 13 22:51:33.837793 kubelet[2741]: I0213 22:51:33.837780 2741 state_mem.go:75] "Updated machine memory state" Feb 13 22:51:33.847965 kubelet[2741]: I0213 22:51:33.847938 2741 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 22:51:33.848196 kubelet[2741]: I0213 22:51:33.848174 2741 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 22:51:33.848281 kubelet[2741]: I0213 22:51:33.848198 2741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 22:51:33.849124 kubelet[2741]: I0213 22:51:33.849002 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 22:51:33.867818 kubelet[2741]: E0213 22:51:33.867690 2741 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 22:51:33.980557 kubelet[2741]: I0213 22:51:33.980395 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:33.981887 kubelet[2741]: I0213 22:51:33.981686 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:33.983670 kubelet[2741]: I0213 22:51:33.982705 2741 kubelet_node_status.go:76] "Attempting to register node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:33.986616 kubelet[2741]: I0213 22:51:33.984136 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:33.996358 kubelet[2741]: W0213 22:51:33.996319 2741 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:51:33.999591 kubelet[2741]: W0213 22:51:33.998465 2741 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:51:34.004623 kubelet[2741]: W0213 22:51:34.003435 2741 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:51:34.009659 kubelet[2741]: I0213 22:51:34.009177 2741 kubelet_node_status.go:125] "Node was previously registered" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.009659 kubelet[2741]: I0213 22:51:34.009284 2741 kubelet_node_status.go:79] "Successfully registered node" node="srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.068740 kubelet[2741]: I0213 22:51:34.068683 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-kubeconfig\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.070712 kubelet[2741]: I0213 22:51:34.070682 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.070861 kubelet[2741]: I0213 22:51:34.070832 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4cdd00cbf6905ad0778c6d8c2e03c4d-kubeconfig\") pod \"kube-scheduler-srv-uk33m.gb1.brightbox.com\" (UID: \"f4cdd00cbf6905ad0778c6d8c2e03c4d\") " pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.071009 kubelet[2741]: I0213 22:51:34.070984 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd6de2db82020755d6288293a366c63d-ca-certs\") pod \"kube-apiserver-srv-uk33m.gb1.brightbox.com\" (UID: \"fd6de2db82020755d6288293a366c63d\") " pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.071162 kubelet[2741]: I0213 22:51:34.071138 2741 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd6de2db82020755d6288293a366c63d-k8s-certs\") pod \"kube-apiserver-srv-uk33m.gb1.brightbox.com\" (UID: \"fd6de2db82020755d6288293a366c63d\") " pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.071411 kubelet[2741]: I0213 22:51:34.071278 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd6de2db82020755d6288293a366c63d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-uk33m.gb1.brightbox.com\" (UID: \"fd6de2db82020755d6288293a366c63d\") " pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.071411 kubelet[2741]: I0213 22:51:34.071314 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-ca-certs\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.071411 kubelet[2741]: I0213 22:51:34.071342 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-k8s-certs\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.071411 kubelet[2741]: I0213 22:51:34.071368 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/994991e7937382829015471f05a1c919-flexvolume-dir\") pod \"kube-controller-manager-srv-uk33m.gb1.brightbox.com\" (UID: \"994991e7937382829015471f05a1c919\") " pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.486395 sudo[2752]: pam_unix(sudo:session): session closed for user root Feb 13 22:51:34.646801 kubelet[2741]: I0213 22:51:34.646721 2741 apiserver.go:52] "Watching apiserver" Feb 13 22:51:34.666643 kubelet[2741]: I0213 22:51:34.666544 2741 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 22:51:34.751742 kubelet[2741]: I0213 22:51:34.748925 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.757336 kubelet[2741]: W0213 22:51:34.757303 2741 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:51:34.757843 kubelet[2741]: E0213 22:51:34.757711 2741 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-uk33m.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" Feb 13 22:51:34.822100 kubelet[2741]: I0213 22:51:34.821876 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-uk33m.gb1.brightbox.com" podStartSLOduration=1.821253029 podStartE2EDuration="1.821253029s" podCreationTimestamp="2025-02-13 22:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-02-13 22:51:34.807138181 +0000 UTC m=+1.337413165" watchObservedRunningTime="2025-02-13 22:51:34.821253029 +0000 UTC m=+1.351528008" Feb 13 22:51:34.837021 kubelet[2741]: I0213 22:51:34.836910 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-uk33m.gb1.brightbox.com" podStartSLOduration=1.836884157 podStartE2EDuration="1.836884157s" podCreationTimestamp="2025-02-13 22:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:51:34.822498132 +0000 UTC m=+1.352773097" watchObservedRunningTime="2025-02-13 22:51:34.836884157 +0000 UTC m=+1.367159136" Feb 13 22:51:34.852016 kubelet[2741]: I0213 22:51:34.851517 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-uk33m.gb1.brightbox.com" podStartSLOduration=1.8514932179999999 podStartE2EDuration="1.851493218s" podCreationTimestamp="2025-02-13 22:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:51:34.838241551 +0000 UTC m=+1.368516541" watchObservedRunningTime="2025-02-13 22:51:34.851493218 +0000 UTC m=+1.381768188" Feb 13 22:51:36.280231 sudo[1780]: pam_unix(sudo:session): session closed for user root Feb 13 22:51:36.424112 sshd[1779]: Connection closed by 147.75.109.163 port 50552 Feb 13 22:51:36.426181 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Feb 13 22:51:36.433386 systemd[1]: sshd@8-10.230.69.138:22-147.75.109.163:50552.service: Deactivated successfully. Feb 13 22:51:36.435905 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 22:51:36.436226 systemd[1]: session-11.scope: Consumed 7.043s CPU time, 137.7M memory peak, 0B memory swap peak. Feb 13 22:51:36.437174 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Feb 13 22:51:36.439183 systemd-logind[1489]: Removed session 11. Feb 13 22:51:37.534841 kubelet[2741]: I0213 22:51:37.534620 2741 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 22:51:37.535424 containerd[1513]: time="2025-02-13T22:51:37.535138433Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 22:51:37.536281 kubelet[2741]: I0213 22:51:37.536235 2741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 22:51:38.084722 systemd[1]: Created slice kubepods-besteffort-pod5f9431eb_8e53_4ad0_acd8_4bb2369ed11d.slice - libcontainer container kubepods-besteffort-pod5f9431eb_8e53_4ad0_acd8_4bb2369ed11d.slice. 
Feb 13 22:51:38.101884 kubelet[2741]: I0213 22:51:38.101826 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-cgroup\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102052 kubelet[2741]: I0213 22:51:38.101903 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-clustermesh-secrets\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102052 kubelet[2741]: I0213 22:51:38.101941 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-bpf-maps\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102052 kubelet[2741]: I0213 22:51:38.101970 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-net\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102052 kubelet[2741]: I0213 22:51:38.102002 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f9431eb-8e53-4ad0-acd8-4bb2369ed11d-lib-modules\") pod \"kube-proxy-gm9bc\" (UID: \"5f9431eb-8e53-4ad0-acd8-4bb2369ed11d\") " pod="kube-system/kube-proxy-gm9bc" Feb 13 22:51:38.102259 kubelet[2741]: I0213 22:51:38.102054 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87n5\" (UniqueName: \"kubernetes.io/projected/5f9431eb-8e53-4ad0-acd8-4bb2369ed11d-kube-api-access-t87n5\") pod \"kube-proxy-gm9bc\" (UID: \"5f9431eb-8e53-4ad0-acd8-4bb2369ed11d\") " pod="kube-system/kube-proxy-gm9bc" Feb 13 22:51:38.102259 kubelet[2741]: I0213 22:51:38.102084 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f9431eb-8e53-4ad0-acd8-4bb2369ed11d-xtables-lock\") pod \"kube-proxy-gm9bc\" (UID: \"5f9431eb-8e53-4ad0-acd8-4bb2369ed11d\") " pod="kube-system/kube-proxy-gm9bc" Feb 13 22:51:38.102259 kubelet[2741]: I0213 22:51:38.102109 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cni-path\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102259 kubelet[2741]: I0213 22:51:38.102148 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-etc-cni-netd\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102259 kubelet[2741]: I0213 22:51:38.102178 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-config-path\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102555 kubelet[2741]: I0213 22:51:38.102216 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-kernel\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102555 kubelet[2741]: I0213 22:51:38.102251 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-xtables-lock\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102555 kubelet[2741]: I0213 22:51:38.102280 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f9431eb-8e53-4ad0-acd8-4bb2369ed11d-kube-proxy\") pod \"kube-proxy-gm9bc\" (UID: \"5f9431eb-8e53-4ad0-acd8-4bb2369ed11d\") " pod="kube-system/kube-proxy-gm9bc" Feb 13 22:51:38.102555 kubelet[2741]: I0213 22:51:38.102307 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hubble-tls\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102555 kubelet[2741]: I0213 22:51:38.102335 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hostproc\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.102555 kubelet[2741]: I0213 22:51:38.102360 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-lib-modules\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.104774 kubelet[2741]: I0213 22:51:38.102386 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphgl\" (UniqueName: \"kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-kube-api-access-xphgl\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.104774 kubelet[2741]: I0213 22:51:38.102416 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-run\") pod \"cilium-6kzdl\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " pod="kube-system/cilium-6kzdl" Feb 13 22:51:38.106165 systemd[1]: Created slice kubepods-burstable-pod039cfe1d_50d6_49a8_83ef_f60be1c2e6d5.slice - libcontainer container kubepods-burstable-pod039cfe1d_50d6_49a8_83ef_f60be1c2e6d5.slice. 
Feb 13 22:51:38.231866 kubelet[2741]: E0213 22:51:38.231757 2741 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 22:51:38.231866 kubelet[2741]: E0213 22:51:38.231827 2741 projected.go:194] Error preparing data for projected volume kube-api-access-xphgl for pod kube-system/cilium-6kzdl: configmap "kube-root-ca.crt" not found Feb 13 22:51:38.233571 kubelet[2741]: E0213 22:51:38.233454 2741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-kube-api-access-xphgl podName:039cfe1d-50d6-49a8-83ef-f60be1c2e6d5 nodeName:}" failed. No retries permitted until 2025-02-13 22:51:38.731954726 +0000 UTC m=+5.262229698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xphgl" (UniqueName: "kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-kube-api-access-xphgl") pod "cilium-6kzdl" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5") : configmap "kube-root-ca.crt" not found Feb 13 22:51:38.233946 kubelet[2741]: E0213 22:51:38.233914 2741 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 22:51:38.234028 kubelet[2741]: E0213 22:51:38.233946 2741 projected.go:194] Error preparing data for projected volume kube-api-access-t87n5 for pod kube-system/kube-proxy-gm9bc: configmap "kube-root-ca.crt" not found Feb 13 22:51:38.234028 kubelet[2741]: E0213 22:51:38.233993 2741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f9431eb-8e53-4ad0-acd8-4bb2369ed11d-kube-api-access-t87n5 podName:5f9431eb-8e53-4ad0-acd8-4bb2369ed11d nodeName:}" failed. No retries permitted until 2025-02-13 22:51:38.733977114 +0000 UTC m=+5.264252078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t87n5" (UniqueName: "kubernetes.io/projected/5f9431eb-8e53-4ad0-acd8-4bb2369ed11d-kube-api-access-t87n5") pod "kube-proxy-gm9bc" (UID: "5f9431eb-8e53-4ad0-acd8-4bb2369ed11d") : configmap "kube-root-ca.crt" not found Feb 13 22:51:38.702156 systemd[1]: Created slice kubepods-besteffort-pod5adf25ac_5cb9_4335_be07_d75abc733b91.slice - libcontainer container kubepods-besteffort-pod5adf25ac_5cb9_4335_be07_d75abc733b91.slice. 
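Both kube-api-access mounts fail here because the kube-root-ca.crt ConfigMap has not been published to the kube-system namespace yet; the volume manager schedules a retry 500ms out (the durationBeforeRetry above) and doubles the delay on each subsequent failure. The same retry shape with apimachinery's backoff helper; the stand-in configMapReady function and the step count are illustrative:

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// configMapReady stands in for the mount attempt; in the log it fails until
// the root-CA publisher controller writes kube-root-ca.crt.
func configMapReady(attempt int) error {
	if attempt < 3 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	attempt := 0
	// Same shape as the kubelet's retry: start at 500ms, double each time.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 8}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if err := configMapReady(attempt); err != nil {
			fmt.Printf("attempt %d: %v; retrying\n", attempt, err)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-api-access volume mounted")
}
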
Feb 13 22:51:38.709012 kubelet[2741]: I0213 22:51:38.707233 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5adf25ac-5cb9-4335-be07-d75abc733b91-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hhfqg\" (UID: \"5adf25ac-5cb9-4335-be07-d75abc733b91\") " pod="kube-system/cilium-operator-6c4d7847fc-hhfqg" Feb 13 22:51:38.709012 kubelet[2741]: I0213 22:51:38.707288 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrdj\" (UniqueName: \"kubernetes.io/projected/5adf25ac-5cb9-4335-be07-d75abc733b91-kube-api-access-knrdj\") pod \"cilium-operator-6c4d7847fc-hhfqg\" (UID: \"5adf25ac-5cb9-4335-be07-d75abc733b91\") " pod="kube-system/cilium-operator-6c4d7847fc-hhfqg" Feb 13 22:51:38.999560 containerd[1513]: time="2025-02-13T22:51:38.999379921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm9bc,Uid:5f9431eb-8e53-4ad0-acd8-4bb2369ed11d,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:39.009633 containerd[1513]: time="2025-02-13T22:51:39.009289868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hhfqg,Uid:5adf25ac-5cb9-4335-be07-d75abc733b91,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:39.023943 containerd[1513]: time="2025-02-13T22:51:39.023881817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6kzdl,Uid:039cfe1d-50d6-49a8-83ef-f60be1c2e6d5,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:39.054330 containerd[1513]: time="2025-02-13T22:51:39.053294298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:51:39.054330 containerd[1513]: time="2025-02-13T22:51:39.053438635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:51:39.054330 containerd[1513]: time="2025-02-13T22:51:39.053463660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:39.055857 containerd[1513]: time="2025-02-13T22:51:39.055696832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:39.075634 containerd[1513]: time="2025-02-13T22:51:39.075200629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:51:39.075634 containerd[1513]: time="2025-02-13T22:51:39.075320359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:51:39.075634 containerd[1513]: time="2025-02-13T22:51:39.075411576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:39.075634 containerd[1513]: time="2025-02-13T22:51:39.075551295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:39.096937 containerd[1513]: time="2025-02-13T22:51:39.096773771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:51:39.098297 containerd[1513]: time="2025-02-13T22:51:39.097971500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:51:39.098297 containerd[1513]: time="2025-02-13T22:51:39.098020601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:39.102855 containerd[1513]: time="2025-02-13T22:51:39.100435020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:51:39.106793 systemd[1]: Started cri-containerd-45e299e9394e5889595b084562c44ed0b44a70d8b06a0eb0a28c81e9e02a084c.scope - libcontainer container 45e299e9394e5889595b084562c44ed0b44a70d8b06a0eb0a28c81e9e02a084c. Feb 13 22:51:39.139892 systemd[1]: Started cri-containerd-44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5.scope - libcontainer container 44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5. Feb 13 22:51:39.154885 systemd[1]: Started cri-containerd-fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f.scope - libcontainer container fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f. Feb 13 22:51:39.212848 containerd[1513]: time="2025-02-13T22:51:39.212368538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm9bc,Uid:5f9431eb-8e53-4ad0-acd8-4bb2369ed11d,Namespace:kube-system,Attempt:0,} returns sandbox id \"45e299e9394e5889595b084562c44ed0b44a70d8b06a0eb0a28c81e9e02a084c\"" Feb 13 22:51:39.238180 containerd[1513]: time="2025-02-13T22:51:39.236718189Z" level=info msg="CreateContainer within sandbox \"45e299e9394e5889595b084562c44ed0b44a70d8b06a0eb0a28c81e9e02a084c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 22:51:39.249759 containerd[1513]: time="2025-02-13T22:51:39.249564633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6kzdl,Uid:039cfe1d-50d6-49a8-83ef-f60be1c2e6d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\"" Feb 13 22:51:39.261223 containerd[1513]: time="2025-02-13T22:51:39.260854063Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 22:51:39.281923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087537949.mount: Deactivated successfully. 
Feb 13 22:51:39.287589 containerd[1513]: time="2025-02-13T22:51:39.287245575Z" level=info msg="CreateContainer within sandbox \"45e299e9394e5889595b084562c44ed0b44a70d8b06a0eb0a28c81e9e02a084c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a4c00a74c9678e9827f386dfe5a9d41179bcde130b12cd0d2858f1394edc436\"" Feb 13 22:51:39.288335 containerd[1513]: time="2025-02-13T22:51:39.288281931Z" level=info msg="StartContainer for \"6a4c00a74c9678e9827f386dfe5a9d41179bcde130b12cd0d2858f1394edc436\"" Feb 13 22:51:39.297667 containerd[1513]: time="2025-02-13T22:51:39.297545834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hhfqg,Uid:5adf25ac-5cb9-4335-be07-d75abc733b91,Namespace:kube-system,Attempt:0,} returns sandbox id \"44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5\"" Feb 13 22:51:39.333792 systemd[1]: Started cri-containerd-6a4c00a74c9678e9827f386dfe5a9d41179bcde130b12cd0d2858f1394edc436.scope - libcontainer container 6a4c00a74c9678e9827f386dfe5a9d41179bcde130b12cd0d2858f1394edc436. Feb 13 22:51:39.388115 containerd[1513]: time="2025-02-13T22:51:39.387975670Z" level=info msg="StartContainer for \"6a4c00a74c9678e9827f386dfe5a9d41179bcde130b12cd0d2858f1394edc436\" returns successfully" Feb 13 22:51:39.807853 kubelet[2741]: I0213 22:51:39.807436 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gm9bc" podStartSLOduration=1.807411374 podStartE2EDuration="1.807411374s" podCreationTimestamp="2025-02-13 22:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:51:39.783089972 +0000 UTC m=+6.313364953" watchObservedRunningTime="2025-02-13 22:51:39.807411374 +0000 UTC m=+6.337686352" Feb 13 22:51:47.139313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327438289.mount: Deactivated successfully. 
Feb 13 22:51:50.186466 containerd[1513]: time="2025-02-13T22:51:50.186139739Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:50.192838 containerd[1513]: time="2025-02-13T22:51:50.192778255Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 22:51:50.197259 containerd[1513]: time="2025-02-13T22:51:50.197195639Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:50.199964 containerd[1513]: time="2025-02-13T22:51:50.199768145Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.938740795s" Feb 13 22:51:50.199964 containerd[1513]: time="2025-02-13T22:51:50.199831169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 22:51:50.201417 containerd[1513]: time="2025-02-13T22:51:50.201382189Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 22:51:50.203763 containerd[1513]: time="2025-02-13T22:51:50.203690574Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 22:51:50.291649 containerd[1513]: time="2025-02-13T22:51:50.290179837Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\"" Feb 13 22:51:50.292637 containerd[1513]: time="2025-02-13T22:51:50.292172556Z" level=info msg="StartContainer for \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\"" Feb 13 22:51:50.454992 systemd[1]: Started cri-containerd-ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a.scope - libcontainer container ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a. Feb 13 22:51:50.502227 containerd[1513]: time="2025-02-13T22:51:50.502169908Z" level=info msg="StartContainer for \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\" returns successfully" Feb 13 22:51:50.520494 systemd[1]: cri-containerd-ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a.scope: Deactivated successfully. 
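The ImageCreate/Pulled lines close out the cilium image pull requested at 22:51:39 (10.94s end to end, 166.7MB read). A sketch of the equivalent CRI image pull, again assuming the default containerd socket; the kubelet issues this call itself when the image is not present:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	// Digest-pinned reference exactly as it appears in the log.
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.ImageRef) // resolved image id, e.g. the sha256:3e35b3e9... seen above
}
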
Feb 13 22:51:50.863949 containerd[1513]: time="2025-02-13T22:51:50.840552706Z" level=info msg="shim disconnected" id=ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a namespace=k8s.io Feb 13 22:51:50.864339 containerd[1513]: time="2025-02-13T22:51:50.864294318Z" level=warning msg="cleaning up after shim disconnected" id=ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a namespace=k8s.io Feb 13 22:51:50.864453 containerd[1513]: time="2025-02-13T22:51:50.864428451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:51:51.259730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a-rootfs.mount: Deactivated successfully. Feb 13 22:51:51.848920 containerd[1513]: time="2025-02-13T22:51:51.848406389Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 22:51:51.872062 containerd[1513]: time="2025-02-13T22:51:51.871634438Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\"" Feb 13 22:51:51.874460 containerd[1513]: time="2025-02-13T22:51:51.874415243Z" level=info msg="StartContainer for \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\"" Feb 13 22:51:51.920809 systemd[1]: Started cri-containerd-b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4.scope - libcontainer container b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4. Feb 13 22:51:51.963027 containerd[1513]: time="2025-02-13T22:51:51.962956287Z" level=info msg="StartContainer for \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\" returns successfully" Feb 13 22:51:51.978221 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 22:51:51.979361 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 22:51:51.979490 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 22:51:51.986574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 22:51:51.986894 systemd[1]: cri-containerd-b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4.scope: Deactivated successfully. Feb 13 22:51:52.018941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4-rootfs.mount: Deactivated successfully. Feb 13 22:51:52.022429 containerd[1513]: time="2025-02-13T22:51:52.022209795Z" level=info msg="shim disconnected" id=b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4 namespace=k8s.io Feb 13 22:51:52.022429 containerd[1513]: time="2025-02-13T22:51:52.022386640Z" level=warning msg="cleaning up after shim disconnected" id=b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4 namespace=k8s.io Feb 13 22:51:52.022429 containerd[1513]: time="2025-02-13T22:51:52.022402720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:51:52.057780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
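mount-cgroup and apply-sysctl-overwrites are cilium's init steps; the systemd-sysctl.service restart logged right after the second one is systemd reapplying kernel variables once the init container has adjusted them. What such a step boils down to is a write under /proc/sys, sketched below; the specific sysctl key and value are illustrative assumptions, not taken from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a kernel variable the way sysctl(8) does: dots in the
// name become path separators under /proc/sys. Requires root.
func setSysctl(name, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(name, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example key only; cilium's real init container manages its own set.
	if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
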
Feb 13 22:51:52.850854 containerd[1513]: time="2025-02-13T22:51:52.850627899Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 22:51:52.895861 containerd[1513]: time="2025-02-13T22:51:52.895785452Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\"" Feb 13 22:51:52.898899 containerd[1513]: time="2025-02-13T22:51:52.896945665Z" level=info msg="StartContainer for \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\"" Feb 13 22:51:52.945903 systemd[1]: Started cri-containerd-b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073.scope - libcontainer container b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073. Feb 13 22:51:52.992836 containerd[1513]: time="2025-02-13T22:51:52.992776390Z" level=info msg="StartContainer for \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\" returns successfully" Feb 13 22:51:53.005028 systemd[1]: cri-containerd-b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073.scope: Deactivated successfully. Feb 13 22:51:53.039822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073-rootfs.mount: Deactivated successfully. Feb 13 22:51:53.043783 containerd[1513]: time="2025-02-13T22:51:53.043688885Z" level=info msg="shim disconnected" id=b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073 namespace=k8s.io Feb 13 22:51:53.043783 containerd[1513]: time="2025-02-13T22:51:53.043771507Z" level=warning msg="cleaning up after shim disconnected" id=b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073 namespace=k8s.io Feb 13 22:51:53.043783 containerd[1513]: time="2025-02-13T22:51:53.043787437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:51:53.061429 containerd[1513]: time="2025-02-13T22:51:53.061307645Z" level=warning msg="cleanup warnings time=\"2025-02-13T22:51:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 22:51:53.870716 containerd[1513]: time="2025-02-13T22:51:53.868859500Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 22:51:53.917781 containerd[1513]: time="2025-02-13T22:51:53.914671034Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\"" Feb 13 22:51:53.921539 containerd[1513]: time="2025-02-13T22:51:53.921472708Z" level=info msg="StartContainer for \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\"" Feb 13 22:51:53.982065 systemd[1]: run-containerd-runc-k8s.io-7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625-runc.3ZG6FW.mount: Deactivated successfully. 
Feb 13 22:51:53.994792 systemd[1]: Started cri-containerd-7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625.scope - libcontainer container 7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625. Feb 13 22:51:54.056029 systemd[1]: cri-containerd-7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625.scope: Deactivated successfully. Feb 13 22:51:54.066429 kubelet[2741]: E0213 22:51:54.066166 2741 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod039cfe1d_50d6_49a8_83ef_f60be1c2e6d5.slice/cri-containerd-7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625.scope\": RecentStats: unable to find data in memory cache]" Feb 13 22:51:54.077182 containerd[1513]: time="2025-02-13T22:51:54.077137281Z" level=info msg="StartContainer for \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\" returns successfully" Feb 13 22:51:54.116137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625-rootfs.mount: Deactivated successfully. Feb 13 22:51:54.136376 containerd[1513]: time="2025-02-13T22:51:54.135713288Z" level=info msg="shim disconnected" id=7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625 namespace=k8s.io Feb 13 22:51:54.136786 containerd[1513]: time="2025-02-13T22:51:54.136574957Z" level=warning msg="cleaning up after shim disconnected" id=7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625 namespace=k8s.io Feb 13 22:51:54.136786 containerd[1513]: time="2025-02-13T22:51:54.136613586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:51:54.867967 containerd[1513]: time="2025-02-13T22:51:54.867901399Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 22:51:54.871562 containerd[1513]: time="2025-02-13T22:51:54.871424800Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:54.876033 containerd[1513]: time="2025-02-13T22:51:54.875834871Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 22:51:54.878619 containerd[1513]: time="2025-02-13T22:51:54.877194880Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:51:54.882619 containerd[1513]: time="2025-02-13T22:51:54.881104921Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.679669312s" Feb 13 22:51:54.882619 containerd[1513]: time="2025-02-13T22:51:54.881161558Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 22:51:54.889448 containerd[1513]: time="2025-02-13T22:51:54.889397938Z" level=info msg="CreateContainer within sandbox \"44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 22:51:54.917278 containerd[1513]: time="2025-02-13T22:51:54.917178863Z" level=info msg="CreateContainer within sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\"" Feb 13 22:51:54.918855 containerd[1513]: time="2025-02-13T22:51:54.918817867Z" level=info msg="StartContainer for \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\"" Feb 13 22:51:54.938907 containerd[1513]: time="2025-02-13T22:51:54.938807749Z" level=info msg="CreateContainer within sandbox \"44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\"" Feb 13 22:51:54.940432 containerd[1513]: time="2025-02-13T22:51:54.940346200Z" level=info msg="StartContainer for \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\"" Feb 13 22:51:54.981866 systemd[1]: Started cri-containerd-ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b.scope - libcontainer container ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b. Feb 13 22:51:55.002952 systemd[1]: Started cri-containerd-91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535.scope - libcontainer container 91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535. Feb 13 22:51:55.051900 containerd[1513]: time="2025-02-13T22:51:55.049661009Z" level=info msg="StartContainer for \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\" returns successfully" Feb 13 22:51:55.123483 containerd[1513]: time="2025-02-13T22:51:55.123315831Z" level=info msg="StartContainer for \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\" returns successfully" Feb 13 22:51:55.458143 kubelet[2741]: I0213 22:51:55.458085 2741 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 22:51:55.573866 systemd[1]: Created slice kubepods-burstable-pod29a758b3_cc0e_4523_ac90_2caf1cd144f3.slice - libcontainer container kubepods-burstable-pod29a758b3_cc0e_4523_ac90_2caf1cd144f3.slice. Feb 13 22:51:55.587535 systemd[1]: Created slice kubepods-burstable-podb72b6109_5b22_4a5c_b0de_5060138bb8db.slice - libcontainer container kubepods-burstable-podb72b6109_5b22_4a5c_b0de_5060138bb8db.slice. 
Feb 13 22:51:55.633720 kubelet[2741]: I0213 22:51:55.633416 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4xfw\" (UniqueName: \"kubernetes.io/projected/b72b6109-5b22-4a5c-b0de-5060138bb8db-kube-api-access-k4xfw\") pod \"coredns-668d6bf9bc-8bdht\" (UID: \"b72b6109-5b22-4a5c-b0de-5060138bb8db\") " pod="kube-system/coredns-668d6bf9bc-8bdht" Feb 13 22:51:55.634803 kubelet[2741]: I0213 22:51:55.634144 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8xd\" (UniqueName: \"kubernetes.io/projected/29a758b3-cc0e-4523-ac90-2caf1cd144f3-kube-api-access-xc8xd\") pod \"coredns-668d6bf9bc-tmq5j\" (UID: \"29a758b3-cc0e-4523-ac90-2caf1cd144f3\") " pod="kube-system/coredns-668d6bf9bc-tmq5j" Feb 13 22:51:55.635025 kubelet[2741]: I0213 22:51:55.634965 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b72b6109-5b22-4a5c-b0de-5060138bb8db-config-volume\") pod \"coredns-668d6bf9bc-8bdht\" (UID: \"b72b6109-5b22-4a5c-b0de-5060138bb8db\") " pod="kube-system/coredns-668d6bf9bc-8bdht" Feb 13 22:51:55.635207 kubelet[2741]: I0213 22:51:55.635176 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29a758b3-cc0e-4523-ac90-2caf1cd144f3-config-volume\") pod \"coredns-668d6bf9bc-tmq5j\" (UID: \"29a758b3-cc0e-4523-ac90-2caf1cd144f3\") " pod="kube-system/coredns-668d6bf9bc-tmq5j" Feb 13 22:51:55.884039 containerd[1513]: time="2025-02-13T22:51:55.883888696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmq5j,Uid:29a758b3-cc0e-4523-ac90-2caf1cd144f3,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:55.895670 containerd[1513]: time="2025-02-13T22:51:55.894852845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdht,Uid:b72b6109-5b22-4a5c-b0de-5060138bb8db,Namespace:kube-system,Attempt:0,}" Feb 13 22:51:56.126110 kubelet[2741]: I0213 22:51:56.125994 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6kzdl" podStartSLOduration=7.183562283 podStartE2EDuration="18.125940589s" podCreationTimestamp="2025-02-13 22:51:38 +0000 UTC" firstStartedPulling="2025-02-13 22:51:39.258815465 +0000 UTC m=+5.789090441" lastFinishedPulling="2025-02-13 22:51:50.201193766 +0000 UTC m=+16.731468747" observedRunningTime="2025-02-13 22:51:56.069309721 +0000 UTC m=+22.599584724" watchObservedRunningTime="2025-02-13 22:51:56.125940589 +0000 UTC m=+22.656215576" Feb 13 22:51:59.125118 systemd-networkd[1439]: cilium_host: Link UP Feb 13 22:51:59.125551 systemd-networkd[1439]: cilium_net: Link UP Feb 13 22:51:59.127045 systemd-networkd[1439]: cilium_net: Gained carrier Feb 13 22:51:59.127324 systemd-networkd[1439]: cilium_host: Gained carrier Feb 13 22:51:59.299469 systemd-networkd[1439]: cilium_vxlan: Link UP Feb 13 22:51:59.299987 systemd-networkd[1439]: cilium_vxlan: Gained carrier Feb 13 22:51:59.847702 kernel: NET: Registered PF_ALG protocol family Feb 13 22:51:59.926943 systemd-networkd[1439]: cilium_host: Gained IPv6LL Feb 13 22:51:59.990940 systemd-networkd[1439]: cilium_net: Gained IPv6LL Feb 13 22:52:00.503915 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL Feb 13 22:52:00.890727 systemd-networkd[1439]: lxc_health: Link UP Feb 13 22:52:00.918900 systemd-networkd[1439]: lxc_health: Gained 
carrier Feb 13 22:52:01.055580 kubelet[2741]: I0213 22:52:01.055395 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hhfqg" podStartSLOduration=7.469125078 podStartE2EDuration="23.05535269s" podCreationTimestamp="2025-02-13 22:51:38 +0000 UTC" firstStartedPulling="2025-02-13 22:51:39.300012562 +0000 UTC m=+5.830287532" lastFinishedPulling="2025-02-13 22:51:54.88624018 +0000 UTC m=+21.416515144" observedRunningTime="2025-02-13 22:51:56.126580792 +0000 UTC m=+22.656855767" watchObservedRunningTime="2025-02-13 22:52:01.05535269 +0000 UTC m=+27.585627672" Feb 13 22:52:01.555338 systemd-networkd[1439]: lxc6d639299ea06: Link UP Feb 13 22:52:01.574662 kernel: eth0: renamed from tmp765c2 Feb 13 22:52:01.586691 systemd-networkd[1439]: lxc5b999d895b07: Link UP Feb 13 22:52:01.598118 systemd-networkd[1439]: lxc6d639299ea06: Gained carrier Feb 13 22:52:01.608563 kernel: eth0: renamed from tmpe0937 Feb 13 22:52:01.617275 systemd-networkd[1439]: lxc5b999d895b07: Gained carrier Feb 13 22:52:02.102833 systemd-networkd[1439]: lxc_health: Gained IPv6LL Feb 13 22:52:02.369811 kubelet[2741]: I0213 22:52:02.368784 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 22:52:03.254852 systemd-networkd[1439]: lxc6d639299ea06: Gained IPv6LL Feb 13 22:52:03.446787 systemd-networkd[1439]: lxc5b999d895b07: Gained IPv6LL Feb 13 22:52:07.195126 containerd[1513]: time="2025-02-13T22:52:07.194562898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:52:07.195126 containerd[1513]: time="2025-02-13T22:52:07.194826267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:52:07.195126 containerd[1513]: time="2025-02-13T22:52:07.194855597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:52:07.195126 containerd[1513]: time="2025-02-13T22:52:07.195022805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:52:07.279883 systemd[1]: Started cri-containerd-e0937465d9fb21b585b41e8f683b1dd3b3bd5cd6fb0bda795e8502d0b4b760ce.scope - libcontainer container e0937465d9fb21b585b41e8f683b1dd3b3bd5cd6fb0bda795e8502d0b4b760ce. Feb 13 22:52:07.285399 containerd[1513]: time="2025-02-13T22:52:07.284990309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:52:07.285819 containerd[1513]: time="2025-02-13T22:52:07.285199748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:52:07.285819 containerd[1513]: time="2025-02-13T22:52:07.285268266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:52:07.286157 containerd[1513]: time="2025-02-13T22:52:07.285670688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:52:07.333437 systemd[1]: run-containerd-runc-k8s.io-765c2dad224603a21f253b9dd3bd204ecd69c03fbb5068795adc8d9c058ed035-runc.G6t2h1.mount: Deactivated successfully. 
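The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+<sec> offsets logged next to each timestamp). A quick check against the cilium-operator entry:

    # E2E is wall clock; the pull window uses the monotonic m=+ offsets.
    e2e  = (22*3600 + 52*60 + 1.05535269) - (22*3600 + 51*60 + 38.0)
    pull = 21.416515144 - 5.830287532   # lastFinishedPulling - firstStartedPulling
    slo  = e2e - pull                   # SLO duration excludes image-pull time
    print(round(e2e, 9), round(pull, 9), round(slo, 9))
    # 23.05535269 15.586227612 7.469125078 -- matching the logged fields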
Feb 13 22:52:07.347846 systemd[1]: Started cri-containerd-765c2dad224603a21f253b9dd3bd204ecd69c03fbb5068795adc8d9c058ed035.scope - libcontainer container 765c2dad224603a21f253b9dd3bd204ecd69c03fbb5068795adc8d9c058ed035. Feb 13 22:52:07.407584 containerd[1513]: time="2025-02-13T22:52:07.407365079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmq5j,Uid:29a758b3-cc0e-4523-ac90-2caf1cd144f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0937465d9fb21b585b41e8f683b1dd3b3bd5cd6fb0bda795e8502d0b4b760ce\"" Feb 13 22:52:07.417819 containerd[1513]: time="2025-02-13T22:52:07.417740832Z" level=info msg="CreateContainer within sandbox \"e0937465d9fb21b585b41e8f683b1dd3b3bd5cd6fb0bda795e8502d0b4b760ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 22:52:07.451885 containerd[1513]: time="2025-02-13T22:52:07.451554317Z" level=info msg="CreateContainer within sandbox \"e0937465d9fb21b585b41e8f683b1dd3b3bd5cd6fb0bda795e8502d0b4b760ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"728bf8053d25079077e87c6e4e478c93fcb3adc61be6e0da3a6bd9f4ddd5d64d\"" Feb 13 22:52:07.453753 containerd[1513]: time="2025-02-13T22:52:07.452938344Z" level=info msg="StartContainer for \"728bf8053d25079077e87c6e4e478c93fcb3adc61be6e0da3a6bd9f4ddd5d64d\"" Feb 13 22:52:07.464992 containerd[1513]: time="2025-02-13T22:52:07.464937558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdht,Uid:b72b6109-5b22-4a5c-b0de-5060138bb8db,Namespace:kube-system,Attempt:0,} returns sandbox id \"765c2dad224603a21f253b9dd3bd204ecd69c03fbb5068795adc8d9c058ed035\"" Feb 13 22:52:07.471798 containerd[1513]: time="2025-02-13T22:52:07.471751179Z" level=info msg="CreateContainer within sandbox \"765c2dad224603a21f253b9dd3bd204ecd69c03fbb5068795adc8d9c058ed035\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 22:52:07.487914 containerd[1513]: time="2025-02-13T22:52:07.487851598Z" level=info msg="CreateContainer within sandbox \"765c2dad224603a21f253b9dd3bd204ecd69c03fbb5068795adc8d9c058ed035\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db928676d1f353ee42db9de7cf6ab53985f3de228a661263c8e97a3a891397f1\"" Feb 13 22:52:07.492763 containerd[1513]: time="2025-02-13T22:52:07.492625772Z" level=info msg="StartContainer for \"db928676d1f353ee42db9de7cf6ab53985f3de228a661263c8e97a3a891397f1\"" Feb 13 22:52:07.507847 systemd[1]: Started cri-containerd-728bf8053d25079077e87c6e4e478c93fcb3adc61be6e0da3a6bd9f4ddd5d64d.scope - libcontainer container 728bf8053d25079077e87c6e4e478c93fcb3adc61be6e0da3a6bd9f4ddd5d64d. Feb 13 22:52:07.545809 systemd[1]: Started cri-containerd-db928676d1f353ee42db9de7cf6ab53985f3de228a661263c8e97a3a891397f1.scope - libcontainer container db928676d1f353ee42db9de7cf6ab53985f3de228a661263c8e97a3a891397f1. 
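The kernel "eth0: renamed from tmp765c2"/"tmpe0937" lines earlier pair up with the sandbox IDs returned here: the temporary veth name carries the leading hex characters of the sandbox ID. That holds for both sandboxes in this log; treat the naming as an observation from this capture, not a guaranteed contract. A small check:

    # Match each temporary interface name to its sandbox by ID prefix.
    sandboxes = [
        "e0937465d9fb21b585b41e8f683b1dd3b3bd5cd6fb0bda795e8502d0b4b760ce",
        "765c2dad224603a21f253b9dd3bd204ecd69c03fbb5068795adc8d9c058ed035",
    ]
    for tmp in ("tmpe0937", "tmp765c2"):      # from the kernel log above
        prefix = tmp[len("tmp"):]
        hits = [s for s in sandboxes if s.startswith(prefix)]
        print(tmp, "->", hits[0][:12] if hits else "no match")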
Feb 13 22:52:07.567493 containerd[1513]: time="2025-02-13T22:52:07.567442117Z" level=info msg="StartContainer for \"728bf8053d25079077e87c6e4e478c93fcb3adc61be6e0da3a6bd9f4ddd5d64d\" returns successfully" Feb 13 22:52:07.610053 containerd[1513]: time="2025-02-13T22:52:07.609999972Z" level=info msg="StartContainer for \"db928676d1f353ee42db9de7cf6ab53985f3de228a661263c8e97a3a891397f1\" returns successfully" Feb 13 22:52:07.961851 kubelet[2741]: I0213 22:52:07.961667 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tmq5j" podStartSLOduration=29.9616415 podStartE2EDuration="29.9616415s" podCreationTimestamp="2025-02-13 22:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:52:07.960251898 +0000 UTC m=+34.490526904" watchObservedRunningTime="2025-02-13 22:52:07.9616415 +0000 UTC m=+34.491916470" Feb 13 22:52:07.963513 kubelet[2741]: I0213 22:52:07.962060 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8bdht" podStartSLOduration=29.96204927 podStartE2EDuration="29.96204927s" podCreationTimestamp="2025-02-13 22:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:52:07.938743602 +0000 UTC m=+34.469018598" watchObservedRunningTime="2025-02-13 22:52:07.96204927 +0000 UTC m=+34.492324262" Feb 13 22:52:45.238102 systemd[1]: Started sshd@10-10.230.69.138:22-147.75.109.163:60474.service - OpenSSH per-connection server daemon (147.75.109.163:60474). Feb 13 22:52:46.153797 sshd[4128]: Accepted publickey for core from 147.75.109.163 port 60474 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:52:46.156634 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:52:46.164841 systemd-logind[1489]: New session 12 of user core. Feb 13 22:52:46.169832 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 22:52:47.300933 sshd[4130]: Connection closed by 147.75.109.163 port 60474 Feb 13 22:52:47.300737 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Feb 13 22:52:47.306157 systemd[1]: sshd@10-10.230.69.138:22-147.75.109.163:60474.service: Deactivated successfully. Feb 13 22:52:47.308487 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 22:52:47.309468 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. Feb 13 22:52:47.311179 systemd-logind[1489]: Removed session 12. Feb 13 22:52:52.458950 systemd[1]: Started sshd@11-10.230.69.138:22-147.75.109.163:53182.service - OpenSSH per-connection server daemon (147.75.109.163:53182). Feb 13 22:52:53.356420 sshd[4146]: Accepted publickey for core from 147.75.109.163 port 53182 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:52:53.358476 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:52:53.365884 systemd-logind[1489]: New session 13 of user core. Feb 13 22:52:53.371830 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 22:52:54.069618 sshd[4148]: Connection closed by 147.75.109.163 port 53182 Feb 13 22:52:54.070573 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Feb 13 22:52:54.075171 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. 
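Unlike the cilium pods, the two coredns latency entries above log firstStartedPulling/lastFinishedPulling as "0001-01-01 00:00:00 +0000 UTC" — Go's zero time.Time — meaning no image pull was recorded during startup because the image was already present on the node. A trivial filter for such entries; the field name mirrors the log, the helper is illustrative:

    # Go's zero time.Time renders as "0001-01-01 00:00:00 +0000 UTC".
    GO_ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"

    def pulled_during_startup(entry: dict) -> bool:
        # entry: dict of kubelet latency-tracker fields (illustrative shape)
        return entry.get("firstStartedPulling", GO_ZERO_TIME) != GO_ZERO_TIME

    print(pulled_during_startup({"firstStartedPulling": GO_ZERO_TIME}))  # False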
Feb 13 22:52:54.076548 systemd[1]: sshd@11-10.230.69.138:22-147.75.109.163:53182.service: Deactivated successfully. Feb 13 22:52:54.079282 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 22:52:54.080994 systemd-logind[1489]: Removed session 13. Feb 13 22:52:59.236058 systemd[1]: Started sshd@12-10.230.69.138:22-147.75.109.163:53190.service - OpenSSH per-connection server daemon (147.75.109.163:53190). Feb 13 22:53:00.134049 sshd[4161]: Accepted publickey for core from 147.75.109.163 port 53190 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:00.136627 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:00.146955 systemd-logind[1489]: New session 14 of user core. Feb 13 22:53:00.151815 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 22:53:00.866050 sshd[4163]: Connection closed by 147.75.109.163 port 53190 Feb 13 22:53:00.865227 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:00.869675 systemd[1]: sshd@12-10.230.69.138:22-147.75.109.163:53190.service: Deactivated successfully. Feb 13 22:53:00.872446 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 22:53:00.876318 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. Feb 13 22:53:00.879418 systemd-logind[1489]: Removed session 14. Feb 13 22:53:06.023025 systemd[1]: Started sshd@13-10.230.69.138:22-147.75.109.163:46662.service - OpenSSH per-connection server daemon (147.75.109.163:46662). Feb 13 22:53:06.919998 sshd[4176]: Accepted publickey for core from 147.75.109.163 port 46662 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:06.922955 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:06.930069 systemd-logind[1489]: New session 15 of user core. Feb 13 22:53:06.939838 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 22:53:07.632037 sshd[4178]: Connection closed by 147.75.109.163 port 46662 Feb 13 22:53:07.633024 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:07.637962 systemd[1]: sshd@13-10.230.69.138:22-147.75.109.163:46662.service: Deactivated successfully. Feb 13 22:53:07.640151 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 22:53:07.641126 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Feb 13 22:53:07.642743 systemd-logind[1489]: Removed session 15. Feb 13 22:53:07.786116 systemd[1]: Started sshd@14-10.230.69.138:22-147.75.109.163:46664.service - OpenSSH per-connection server daemon (147.75.109.163:46664). Feb 13 22:53:08.685619 sshd[4191]: Accepted publickey for core from 147.75.109.163 port 46664 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:08.687697 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:08.694135 systemd-logind[1489]: New session 16 of user core. Feb 13 22:53:08.705875 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 22:53:09.450676 sshd[4193]: Connection closed by 147.75.109.163 port 46664 Feb 13 22:53:09.451667 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:09.470750 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit. Feb 13 22:53:09.471885 systemd[1]: sshd@14-10.230.69.138:22-147.75.109.163:46664.service: Deactivated successfully. 
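The pam_unix "session opened"/"session closed" pairs above can be joined on the sshd-session PID to get per-session durations. A rough sketch over raw journal lines; the timestamps carry no year, which is fine for same-day deltas:

    import re
    from datetime import datetime

    PAT = re.compile(r"^(?P<ts>\w{3} \d+ [\d:]+\.\d+) sshd-session\[(?P<pid>\d+)\]: "
                     r"pam_unix\(sshd:session\): session (?P<ev>opened|closed)")

    def session_durations(lines):
        opened = {}
        for line in lines:
            m = PAT.match(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
            if m["ev"] == "opened":
                opened[m["pid"]] = ts
            elif m["pid"] in opened:
                yield m["pid"], (ts - opened.pop(m["pid"])).total_seconds()

    print(dict(session_durations([
        "Feb 13 22:52:46.156634 sshd-session[4128]: pam_unix(sshd:session): "
        "session opened for user core(uid=500) by core(uid=0)",
        "Feb 13 22:52:47.300737 sshd-session[4128]: pam_unix(sshd:session): "
        "session closed for user core",
    ])))  # {'4128': 1.144103}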
Feb 13 22:53:09.474257 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 22:53:09.475529 systemd-logind[1489]: Removed session 16. Feb 13 22:53:09.619267 systemd[1]: Started sshd@15-10.230.69.138:22-147.75.109.163:49466.service - OpenSSH per-connection server daemon (147.75.109.163:49466). Feb 13 22:53:10.528540 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 49466 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:10.530454 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:10.536553 systemd-logind[1489]: New session 17 of user core. Feb 13 22:53:10.541765 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 22:53:11.252683 sshd[4205]: Connection closed by 147.75.109.163 port 49466 Feb 13 22:53:11.254114 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:11.258867 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit. Feb 13 22:53:11.260008 systemd[1]: sshd@15-10.230.69.138:22-147.75.109.163:49466.service: Deactivated successfully. Feb 13 22:53:11.262485 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 22:53:11.264040 systemd-logind[1489]: Removed session 17. Feb 13 22:53:16.410954 systemd[1]: Started sshd@16-10.230.69.138:22-147.75.109.163:49472.service - OpenSSH per-connection server daemon (147.75.109.163:49472). Feb 13 22:53:17.318173 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 49472 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:17.320626 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:17.328013 systemd-logind[1489]: New session 18 of user core. Feb 13 22:53:17.333803 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 22:53:18.027665 sshd[4217]: Connection closed by 147.75.109.163 port 49472 Feb 13 22:53:18.028553 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:18.034011 systemd[1]: sshd@16-10.230.69.138:22-147.75.109.163:49472.service: Deactivated successfully. Feb 13 22:53:18.036675 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 22:53:18.037774 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit. Feb 13 22:53:18.039392 systemd-logind[1489]: Removed session 18. Feb 13 22:53:23.185990 systemd[1]: Started sshd@17-10.230.69.138:22-147.75.109.163:52656.service - OpenSSH per-connection server daemon (147.75.109.163:52656). Feb 13 22:53:24.069971 sshd[4228]: Accepted publickey for core from 147.75.109.163 port 52656 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:24.071928 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:24.078521 systemd-logind[1489]: New session 19 of user core. Feb 13 22:53:24.083822 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 22:53:24.767673 sshd[4230]: Connection closed by 147.75.109.163 port 52656 Feb 13 22:53:24.769503 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:24.779189 systemd[1]: sshd@17-10.230.69.138:22-147.75.109.163:52656.service: Deactivated successfully. Feb 13 22:53:24.781263 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 22:53:24.782213 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit. Feb 13 22:53:24.784221 systemd-logind[1489]: Removed session 19. 
Feb 13 22:53:24.930975 systemd[1]: Started sshd@18-10.230.69.138:22-147.75.109.163:52668.service - OpenSSH per-connection server daemon (147.75.109.163:52668). Feb 13 22:53:25.825691 sshd[4241]: Accepted publickey for core from 147.75.109.163 port 52668 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:25.828360 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:25.835256 systemd-logind[1489]: New session 20 of user core. Feb 13 22:53:25.840784 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 22:53:26.898446 sshd[4243]: Connection closed by 147.75.109.163 port 52668 Feb 13 22:53:26.900169 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:26.906857 systemd[1]: sshd@18-10.230.69.138:22-147.75.109.163:52668.service: Deactivated successfully. Feb 13 22:53:26.909723 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 22:53:26.910845 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit. Feb 13 22:53:26.912447 systemd-logind[1489]: Removed session 20. Feb 13 22:53:27.061975 systemd[1]: Started sshd@19-10.230.69.138:22-147.75.109.163:52670.service - OpenSSH per-connection server daemon (147.75.109.163:52670). Feb 13 22:53:27.957791 sshd[4252]: Accepted publickey for core from 147.75.109.163 port 52670 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:27.959988 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:27.967142 systemd-logind[1489]: New session 21 of user core. Feb 13 22:53:27.975800 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 22:53:29.665133 sshd[4255]: Connection closed by 147.75.109.163 port 52670 Feb 13 22:53:29.666125 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:29.670680 systemd[1]: sshd@19-10.230.69.138:22-147.75.109.163:52670.service: Deactivated successfully. Feb 13 22:53:29.671158 systemd-logind[1489]: Session 21 logged out. Waiting for processes to exit. Feb 13 22:53:29.673989 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 22:53:29.678869 systemd-logind[1489]: Removed session 21. Feb 13 22:53:29.828015 systemd[1]: Started sshd@20-10.230.69.138:22-147.75.109.163:49624.service - OpenSSH per-connection server daemon (147.75.109.163:49624). Feb 13 22:53:30.722716 sshd[4271]: Accepted publickey for core from 147.75.109.163 port 49624 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:30.724685 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:30.732477 systemd-logind[1489]: New session 22 of user core. Feb 13 22:53:30.741901 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 22:53:31.613528 sshd[4273]: Connection closed by 147.75.109.163 port 49624 Feb 13 22:53:31.614097 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:31.618971 systemd[1]: sshd@20-10.230.69.138:22-147.75.109.163:49624.service: Deactivated successfully. Feb 13 22:53:31.621780 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 22:53:31.624044 systemd-logind[1489]: Session 22 logged out. Waiting for processes to exit. Feb 13 22:53:31.625383 systemd-logind[1489]: Removed session 22. 
Feb 13 22:53:31.784920 systemd[1]: Started sshd@21-10.230.69.138:22-147.75.109.163:49630.service - OpenSSH per-connection server daemon (147.75.109.163:49630). Feb 13 22:53:32.685213 sshd[4282]: Accepted publickey for core from 147.75.109.163 port 49630 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:32.687583 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:32.695383 systemd-logind[1489]: New session 23 of user core. Feb 13 22:53:32.703848 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 22:53:33.385186 sshd[4284]: Connection closed by 147.75.109.163 port 49630 Feb 13 22:53:33.386246 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:33.391799 systemd[1]: sshd@21-10.230.69.138:22-147.75.109.163:49630.service: Deactivated successfully. Feb 13 22:53:33.394945 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 22:53:33.396071 systemd-logind[1489]: Session 23 logged out. Waiting for processes to exit. Feb 13 22:53:33.397376 systemd-logind[1489]: Removed session 23. Feb 13 22:53:38.547993 systemd[1]: Started sshd@22-10.230.69.138:22-147.75.109.163:49636.service - OpenSSH per-connection server daemon (147.75.109.163:49636). Feb 13 22:53:38.639946 systemd[1]: Started sshd@23-10.230.69.138:22-162.19.243.145:42922.service - OpenSSH per-connection server daemon (162.19.243.145:42922). Feb 13 22:53:38.855403 sshd[4302]: Invalid user adi from 162.19.243.145 port 42922 Feb 13 22:53:38.887296 sshd[4302]: Received disconnect from 162.19.243.145 port 42922:11: Bye Bye [preauth] Feb 13 22:53:38.887296 sshd[4302]: Disconnected from invalid user adi 162.19.243.145 port 42922 [preauth] Feb 13 22:53:38.890185 systemd[1]: sshd@23-10.230.69.138:22-162.19.243.145:42922.service: Deactivated successfully. Feb 13 22:53:39.445062 sshd[4299]: Accepted publickey for core from 147.75.109.163 port 49636 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:39.447115 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:39.453663 systemd-logind[1489]: New session 24 of user core. Feb 13 22:53:39.465830 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 22:53:40.157819 sshd[4307]: Connection closed by 147.75.109.163 port 49636 Feb 13 22:53:40.158897 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:40.163148 systemd[1]: sshd@22-10.230.69.138:22-147.75.109.163:49636.service: Deactivated successfully. Feb 13 22:53:40.166174 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 22:53:40.168175 systemd-logind[1489]: Session 24 logged out. Waiting for processes to exit. Feb 13 22:53:40.169873 systemd-logind[1489]: Removed session 24. Feb 13 22:53:45.311941 systemd[1]: Started sshd@24-10.230.69.138:22-147.75.109.163:41278.service - OpenSSH per-connection server daemon (147.75.109.163:41278). Feb 13 22:53:46.208044 sshd[4319]: Accepted publickey for core from 147.75.109.163 port 41278 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:46.210643 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:46.217451 systemd-logind[1489]: New session 25 of user core. Feb 13 22:53:46.223928 systemd[1]: Started session-25.scope - Session 25 of User core. 
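The "Invalid user adi from 162.19.243.145" entry above is a routine pre-auth probe that disconnected before authenticating; such attempts are easy to tally per source IP from journal text. A sketch:

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (\S+) from ([\d.]+) port \d+")

    def probe_counts(lines):
        c = Counter()
        for line in lines:
            m = INVALID.search(line)
            if m:
                c[m.group(2)] += 1
        return c

    print(probe_counts([
        "Feb 13 22:53:38.855403 sshd[4302]: Invalid user adi "
        "from 162.19.243.145 port 42922",
    ]))  # Counter({'162.19.243.145': 1})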
Feb 13 22:53:46.934628 sshd[4321]: Connection closed by 147.75.109.163 port 41278 Feb 13 22:53:46.935583 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:46.940949 systemd-logind[1489]: Session 25 logged out. Waiting for processes to exit. Feb 13 22:53:46.941441 systemd[1]: sshd@24-10.230.69.138:22-147.75.109.163:41278.service: Deactivated successfully. Feb 13 22:53:46.946483 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 22:53:46.962218 systemd-logind[1489]: Removed session 25. Feb 13 22:53:52.093989 systemd[1]: Started sshd@25-10.230.69.138:22-147.75.109.163:52614.service - OpenSSH per-connection server daemon (147.75.109.163:52614). Feb 13 22:53:52.983736 sshd[4331]: Accepted publickey for core from 147.75.109.163 port 52614 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:52.985674 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:52.992347 systemd-logind[1489]: New session 26 of user core. Feb 13 22:53:53.001915 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 22:53:53.680701 sshd[4333]: Connection closed by 147.75.109.163 port 52614 Feb 13 22:53:53.681202 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:53.686262 systemd[1]: sshd@25-10.230.69.138:22-147.75.109.163:52614.service: Deactivated successfully. Feb 13 22:53:53.689266 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 22:53:53.690555 systemd-logind[1489]: Session 26 logged out. Waiting for processes to exit. Feb 13 22:53:53.692180 systemd-logind[1489]: Removed session 26. Feb 13 22:53:53.839032 systemd[1]: Started sshd@26-10.230.69.138:22-147.75.109.163:52626.service - OpenSSH per-connection server daemon (147.75.109.163:52626). Feb 13 22:53:54.724061 sshd[4343]: Accepted publickey for core from 147.75.109.163 port 52626 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:54.726118 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:54.733890 systemd-logind[1489]: New session 27 of user core. Feb 13 22:53:54.740961 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 22:53:56.759469 containerd[1513]: time="2025-02-13T22:53:56.759369887Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 22:53:56.789713 containerd[1513]: time="2025-02-13T22:53:56.789497651Z" level=info msg="StopContainer for \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\" with timeout 2 (s)" Feb 13 22:53:56.790761 containerd[1513]: time="2025-02-13T22:53:56.790507028Z" level=info msg="StopContainer for \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\" with timeout 30 (s)" Feb 13 22:53:56.792682 containerd[1513]: time="2025-02-13T22:53:56.791657149Z" level=info msg="Stop container \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\" with signal terminated" Feb 13 22:53:56.809288 containerd[1513]: time="2025-02-13T22:53:56.809209590Z" level=info msg="Stop container \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\" with signal terminated" Feb 13 22:53:56.809710 systemd-networkd[1439]: lxc_health: Link DOWN Feb 13 22:53:56.809724 systemd-networkd[1439]: lxc_health: Lost carrier Feb 13 22:53:56.838405 systemd[1]: cri-containerd-91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535.scope: Deactivated successfully. Feb 13 22:53:56.847191 systemd[1]: cri-containerd-ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b.scope: Deactivated successfully. Feb 13 22:53:56.848203 systemd[1]: cri-containerd-ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b.scope: Consumed 9.942s CPU time. Feb 13 22:53:56.884544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535-rootfs.mount: Deactivated successfully. Feb 13 22:53:56.892545 containerd[1513]: time="2025-02-13T22:53:56.892413142Z" level=info msg="shim disconnected" id=91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535 namespace=k8s.io Feb 13 22:53:56.892843 containerd[1513]: time="2025-02-13T22:53:56.892669518Z" level=warning msg="cleaning up after shim disconnected" id=91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535 namespace=k8s.io Feb 13 22:53:56.893160 containerd[1513]: time="2025-02-13T22:53:56.893030148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:53:56.897683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b-rootfs.mount: Deactivated successfully. 
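Two details in the teardown above: containerd honors per-container stop timeouts (2 s for the agent, 30 s for the operator) before escalating past the terminate signal (typically to SIGKILL), and systemd reports cumulative CPU time as each container scope is deactivated ("Consumed 9.942s CPU time"). A sketch that collects those CPU figures from journal text, keyed by the first 12 hex characters of the container ID:

    import re

    CPU = re.compile(r"cri-containerd-([0-9a-f]{12})[0-9a-f]*\.scope: "
                     r"Consumed ([\d.]+)s CPU time")

    def cpu_seconds(lines):
        return {m.group(1): float(m.group(2))
                for m in map(CPU.search, lines) if m}

    print(cpu_seconds([
        "Feb 13 22:53:56.848203 systemd[1]: cri-containerd-ebb74a4e47a3f3e5"
        "1d16f284cc6a88a866fc6d7f4d298ca3932038982109977b.scope: "
        "Consumed 9.942s CPU time.",
    ]))  # {'ebb74a4e47a3': 9.942}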
Feb 13 22:53:56.913826 containerd[1513]: time="2025-02-13T22:53:56.913487050Z" level=info msg="shim disconnected" id=ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b namespace=k8s.io Feb 13 22:53:56.913826 containerd[1513]: time="2025-02-13T22:53:56.913620419Z" level=warning msg="cleaning up after shim disconnected" id=ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b namespace=k8s.io Feb 13 22:53:56.913826 containerd[1513]: time="2025-02-13T22:53:56.913638509Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:53:56.945879 containerd[1513]: time="2025-02-13T22:53:56.945804796Z" level=info msg="StopContainer for \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\" returns successfully" Feb 13 22:53:56.946127 containerd[1513]: time="2025-02-13T22:53:56.946078832Z" level=info msg="StopContainer for \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\" returns successfully" Feb 13 22:53:56.951939 containerd[1513]: time="2025-02-13T22:53:56.951782350Z" level=info msg="StopPodSandbox for \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\"" Feb 13 22:53:56.952195 containerd[1513]: time="2025-02-13T22:53:56.952128078Z" level=info msg="StopPodSandbox for \"44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5\"" Feb 13 22:53:56.957919 containerd[1513]: time="2025-02-13T22:53:56.953329152Z" level=info msg="Container to stop \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 22:53:56.958377 containerd[1513]: time="2025-02-13T22:53:56.953322680Z" level=info msg="Container to stop \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 22:53:56.958377 containerd[1513]: time="2025-02-13T22:53:56.958240578Z" level=info msg="Container to stop \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 22:53:56.958377 containerd[1513]: time="2025-02-13T22:53:56.958259777Z" level=info msg="Container to stop \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 22:53:56.958377 containerd[1513]: time="2025-02-13T22:53:56.958288159Z" level=info msg="Container to stop \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 22:53:56.958377 containerd[1513]: time="2025-02-13T22:53:56.958305242Z" level=info msg="Container to stop \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 22:53:56.960856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f-shm.mount: Deactivated successfully. Feb 13 22:53:56.961041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5-shm.mount: Deactivated successfully. Feb 13 22:53:56.977091 systemd[1]: cri-containerd-fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f.scope: Deactivated successfully. Feb 13 22:53:56.985517 systemd[1]: cri-containerd-44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5.scope: Deactivated successfully. 
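The "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" lines above are informational: when StopPodSandbox tears down a sandbox, containerd skips signalling containers that have already exited (here the init/apply containers plus the just-stopped workloads). Collecting those IDs shows which containers needed no signal; a sketch:

    import re

    EXITED = re.compile(r'Container to stop \\?"([0-9a-f]{64})\\?" '
                        r'must be in running or unknown state')

    line = ('msg="Container to stop \\"91f258626d0d63f12926a776cd42c53b05e9bd'
            '5abfc9e985726de6e8cca24535\\" must be in running or unknown state, '
            'current state \\"CONTAINER_EXITED\\""')
    print(EXITED.search(line).group(1)[:12])  # 91f258626d0d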
Feb 13 22:53:57.028762 containerd[1513]: time="2025-02-13T22:53:57.028319521Z" level=info msg="shim disconnected" id=fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f namespace=k8s.io Feb 13 22:53:57.028762 containerd[1513]: time="2025-02-13T22:53:57.028391661Z" level=warning msg="cleaning up after shim disconnected" id=fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f namespace=k8s.io Feb 13 22:53:57.028762 containerd[1513]: time="2025-02-13T22:53:57.028406486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:53:57.035919 containerd[1513]: time="2025-02-13T22:53:57.035760756Z" level=info msg="shim disconnected" id=44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5 namespace=k8s.io Feb 13 22:53:57.035919 containerd[1513]: time="2025-02-13T22:53:57.035905700Z" level=warning msg="cleaning up after shim disconnected" id=44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5 namespace=k8s.io Feb 13 22:53:57.035919 containerd[1513]: time="2025-02-13T22:53:57.035923094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:53:57.066662 containerd[1513]: time="2025-02-13T22:53:57.066353317Z" level=warning msg="cleanup warnings time=\"2025-02-13T22:53:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 22:53:57.070336 containerd[1513]: time="2025-02-13T22:53:57.070081904Z" level=info msg="TearDown network for sandbox \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" successfully" Feb 13 22:53:57.070336 containerd[1513]: time="2025-02-13T22:53:57.070115329Z" level=info msg="StopPodSandbox for \"fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f\" returns successfully" Feb 13 22:53:57.082579 containerd[1513]: time="2025-02-13T22:53:57.080858536Z" level=info msg="TearDown network for sandbox \"44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5\" successfully" Feb 13 22:53:57.082579 containerd[1513]: time="2025-02-13T22:53:57.080895978Z" level=info msg="StopPodSandbox for \"44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5\" returns successfully" Feb 13 22:53:57.221406 kubelet[2741]: I0213 22:53:57.221189 2741 scope.go:117] "RemoveContainer" containerID="ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b" Feb 13 22:53:57.227822 kubelet[2741]: I0213 22:53:57.226709 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-clustermesh-secrets\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.227822 kubelet[2741]: I0213 22:53:57.226769 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-run\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.227822 kubelet[2741]: I0213 22:53:57.226802 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-etc-cni-netd\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.227822 kubelet[2741]: I0213 22:53:57.226826 2741 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-kernel\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.227822 kubelet[2741]: I0213 22:53:57.226854 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hostproc\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.227822 kubelet[2741]: I0213 22:53:57.226884 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-config-path\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.228378 kubelet[2741]: I0213 22:53:57.226924 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cni-path\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.228378 kubelet[2741]: I0213 22:53:57.226954 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knrdj\" (UniqueName: \"kubernetes.io/projected/5adf25ac-5cb9-4335-be07-d75abc733b91-kube-api-access-knrdj\") pod \"5adf25ac-5cb9-4335-be07-d75abc733b91\" (UID: \"5adf25ac-5cb9-4335-be07-d75abc733b91\") " Feb 13 22:53:57.228378 kubelet[2741]: I0213 22:53:57.226981 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-cgroup\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.228378 kubelet[2741]: I0213 22:53:57.227003 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-lib-modules\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.228378 kubelet[2741]: I0213 22:53:57.227027 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphgl\" (UniqueName: \"kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-kube-api-access-xphgl\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.228378 kubelet[2741]: I0213 22:53:57.227051 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-net\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.229119 containerd[1513]: time="2025-02-13T22:53:57.228324378Z" level=info msg="RemoveContainer for \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\"" Feb 13 22:53:57.229452 kubelet[2741]: I0213 22:53:57.227077 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5adf25ac-5cb9-4335-be07-d75abc733b91-cilium-config-path\") pod \"5adf25ac-5cb9-4335-be07-d75abc733b91\" (UID: \"5adf25ac-5cb9-4335-be07-d75abc733b91\") " Feb 13 22:53:57.229452 kubelet[2741]: I0213 22:53:57.227100 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-bpf-maps\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.229452 kubelet[2741]: I0213 22:53:57.227122 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-xtables-lock\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.229452 kubelet[2741]: I0213 22:53:57.227147 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hubble-tls\") pod \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\" (UID: \"039cfe1d-50d6-49a8-83ef-f60be1c2e6d5\") " Feb 13 22:53:57.234448 containerd[1513]: time="2025-02-13T22:53:57.234368146Z" level=info msg="RemoveContainer for \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\" returns successfully" Feb 13 22:53:57.241111 kubelet[2741]: I0213 22:53:57.240094 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.241235 kubelet[2741]: I0213 22:53:57.239994 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5adf25ac-5cb9-4335-be07-d75abc733b91-kube-api-access-knrdj" (OuterVolumeSpecName: "kube-api-access-knrdj") pod "5adf25ac-5cb9-4335-be07-d75abc733b91" (UID: "5adf25ac-5cb9-4335-be07-d75abc733b91"). InnerVolumeSpecName "kube-api-access-knrdj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 22:53:57.241235 kubelet[2741]: I0213 22:53:57.241190 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.245543 kubelet[2741]: I0213 22:53:57.244512 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 22:53:57.245543 kubelet[2741]: I0213 22:53:57.244541 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-kube-api-access-xphgl" (OuterVolumeSpecName: "kube-api-access-xphgl") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). 
InnerVolumeSpecName "kube-api-access-xphgl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 22:53:57.245543 kubelet[2741]: I0213 22:53:57.244575 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.245543 kubelet[2741]: I0213 22:53:57.244608 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.245543 kubelet[2741]: I0213 22:53:57.244638 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.245804 kubelet[2741]: I0213 22:53:57.244672 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.245804 kubelet[2741]: I0213 22:53:57.244702 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hostproc" (OuterVolumeSpecName: "hostproc") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.249019 kubelet[2741]: I0213 22:53:57.248975 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5adf25ac-5cb9-4335-be07-d75abc733b91-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5adf25ac-5cb9-4335-be07-d75abc733b91" (UID: "5adf25ac-5cb9-4335-be07-d75abc733b91"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 22:53:57.249116 kubelet[2741]: I0213 22:53:57.249075 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.249203 kubelet[2741]: I0213 22:53:57.249117 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.249203 kubelet[2741]: I0213 22:53:57.249138 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 22:53:57.249367 kubelet[2741]: I0213 22:53:57.249338 2741 scope.go:117] "RemoveContainer" containerID="7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625" Feb 13 22:53:57.249688 kubelet[2741]: I0213 22:53:57.249484 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cni-path" (OuterVolumeSpecName: "cni-path") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 22:53:57.250295 kubelet[2741]: I0213 22:53:57.250253 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" (UID: "039cfe1d-50d6-49a8-83ef-f60be1c2e6d5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 22:53:57.251050 containerd[1513]: time="2025-02-13T22:53:57.250982319Z" level=info msg="RemoveContainer for \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\"" Feb 13 22:53:57.254517 containerd[1513]: time="2025-02-13T22:53:57.254484337Z" level=info msg="RemoveContainer for \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\" returns successfully" Feb 13 22:53:57.254735 kubelet[2741]: I0213 22:53:57.254708 2741 scope.go:117] "RemoveContainer" containerID="b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073" Feb 13 22:53:57.256476 containerd[1513]: time="2025-02-13T22:53:57.256315488Z" level=info msg="RemoveContainer for \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\"" Feb 13 22:53:57.259900 containerd[1513]: time="2025-02-13T22:53:57.259839042Z" level=info msg="RemoveContainer for \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\" returns successfully" Feb 13 22:53:57.260215 kubelet[2741]: I0213 22:53:57.260087 2741 scope.go:117] "RemoveContainer" containerID="b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4" Feb 13 22:53:57.261568 containerd[1513]: time="2025-02-13T22:53:57.261536271Z" level=info msg="RemoveContainer for \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\"" Feb 13 22:53:57.264786 containerd[1513]: time="2025-02-13T22:53:57.264753314Z" level=info msg="RemoveContainer for \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\" returns successfully" Feb 13 22:53:57.265746 kubelet[2741]: I0213 22:53:57.265168 2741 scope.go:117] "RemoveContainer" containerID="ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a" Feb 13 22:53:57.266454 containerd[1513]: time="2025-02-13T22:53:57.266424862Z" level=info msg="RemoveContainer for \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\"" Feb 13 22:53:57.269031 containerd[1513]: time="2025-02-13T22:53:57.268994897Z" level=info msg="RemoveContainer for 
\"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\" returns successfully" Feb 13 22:53:57.269251 kubelet[2741]: I0213 22:53:57.269159 2741 scope.go:117] "RemoveContainer" containerID="ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b" Feb 13 22:53:57.269430 containerd[1513]: time="2025-02-13T22:53:57.269376587Z" level=error msg="ContainerStatus for \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\": not found" Feb 13 22:53:57.270178 kubelet[2741]: E0213 22:53:57.269957 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\": not found" containerID="ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b" Feb 13 22:53:57.273712 kubelet[2741]: I0213 22:53:57.270988 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b"} err="failed to get container status \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ebb74a4e47a3f3e51d16f284cc6a88a866fc6d7f4d298ca3932038982109977b\": not found" Feb 13 22:53:57.273712 kubelet[2741]: I0213 22:53:57.273587 2741 scope.go:117] "RemoveContainer" containerID="7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625" Feb 13 22:53:57.274157 containerd[1513]: time="2025-02-13T22:53:57.274045562Z" level=error msg="ContainerStatus for \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\": not found" Feb 13 22:53:57.274883 kubelet[2741]: E0213 22:53:57.274744 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\": not found" containerID="7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625" Feb 13 22:53:57.275099 kubelet[2741]: I0213 22:53:57.275015 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625"} err="failed to get container status \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f529c91fa669345efa8da0c3d8499697fde35a565ab7fd7840b9011981ad625\": not found" Feb 13 22:53:57.275485 kubelet[2741]: I0213 22:53:57.275189 2741 scope.go:117] "RemoveContainer" containerID="b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073" Feb 13 22:53:57.275564 containerd[1513]: time="2025-02-13T22:53:57.275500497Z" level=error msg="ContainerStatus for \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\": not found" Feb 13 22:53:57.275846 kubelet[2741]: E0213 22:53:57.275666 2741 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\": not found" containerID="b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073" Feb 13 22:53:57.275846 kubelet[2741]: I0213 22:53:57.275694 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073"} err="failed to get container status \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\": rpc error: code = NotFound desc = an error occurred when try to find container \"b768a2095461ba0354cc34f7ff914037ee4becec741a9b40519408530fae5073\": not found" Feb 13 22:53:57.275846 kubelet[2741]: I0213 22:53:57.275714 2741 scope.go:117] "RemoveContainer" containerID="b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4" Feb 13 22:53:57.276418 containerd[1513]: time="2025-02-13T22:53:57.276115221Z" level=error msg="ContainerStatus for \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\": not found" Feb 13 22:53:57.276487 kubelet[2741]: E0213 22:53:57.276339 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\": not found" containerID="b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4" Feb 13 22:53:57.276487 kubelet[2741]: I0213 22:53:57.276367 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4"} err="failed to get container status \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4805739ea3d5c9896724654fdfcd579acdea589c8e425f9e7e80f4fb47596b4\": not found" Feb 13 22:53:57.277034 kubelet[2741]: I0213 22:53:57.276388 2741 scope.go:117] "RemoveContainer" containerID="ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a" Feb 13 22:53:57.277414 containerd[1513]: time="2025-02-13T22:53:57.276961286Z" level=error msg="ContainerStatus for \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\": not found" Feb 13 22:53:57.277475 kubelet[2741]: E0213 22:53:57.277221 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\": not found" containerID="ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a" Feb 13 22:53:57.277475 kubelet[2741]: I0213 22:53:57.277255 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a"} err="failed to get container status \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef6d04928657abef93a24546df42c9ed8d75325b63f0eda63e3e06b0f2114e7a\": not found" Feb 13 
22:53:57.277475 kubelet[2741]: I0213 22:53:57.277289 2741 scope.go:117] "RemoveContainer" containerID="91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535" Feb 13 22:53:57.279165 containerd[1513]: time="2025-02-13T22:53:57.279064856Z" level=info msg="RemoveContainer for \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\"" Feb 13 22:53:57.284043 containerd[1513]: time="2025-02-13T22:53:57.284006019Z" level=info msg="RemoveContainer for \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\" returns successfully" Feb 13 22:53:57.284736 kubelet[2741]: I0213 22:53:57.284375 2741 scope.go:117] "RemoveContainer" containerID="91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535" Feb 13 22:53:57.284805 containerd[1513]: time="2025-02-13T22:53:57.284643958Z" level=error msg="ContainerStatus for \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\": not found" Feb 13 22:53:57.285118 kubelet[2741]: E0213 22:53:57.284994 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\": not found" containerID="91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535" Feb 13 22:53:57.285118 kubelet[2741]: I0213 22:53:57.285033 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535"} err="failed to get container status \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\": rpc error: code = NotFound desc = an error occurred when try to find container \"91f258626d0d63f12926a776cd42c53b05e9bd5abfc9e985726de6e8cca24535\": not found" Feb 13 22:53:57.327463 kubelet[2741]: I0213 22:53:57.327396 2741 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hostproc\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327463 kubelet[2741]: I0213 22:53:57.327461 2741 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-etc-cni-netd\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327463 kubelet[2741]: I0213 22:53:57.327482 2741 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-kernel\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327875 kubelet[2741]: I0213 22:53:57.327501 2741 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cni-path\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327875 kubelet[2741]: I0213 22:53:57.327518 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-config-path\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327875 kubelet[2741]: I0213 22:53:57.327533 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-cgroup\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327875 kubelet[2741]: I0213 22:53:57.327548 2741 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-lib-modules\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327875 kubelet[2741]: I0213 22:53:57.327563 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-knrdj\" (UniqueName: \"kubernetes.io/projected/5adf25ac-5cb9-4335-be07-d75abc733b91-kube-api-access-knrdj\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327875 kubelet[2741]: I0213 22:53:57.327578 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xphgl\" (UniqueName: \"kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-kube-api-access-xphgl\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.327875 kubelet[2741]: I0213 22:53:57.327628 2741 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-host-proc-sys-net\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.328380 kubelet[2741]: I0213 22:53:57.327646 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5adf25ac-5cb9-4335-be07-d75abc733b91-cilium-config-path\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.328380 kubelet[2741]: I0213 22:53:57.327662 2741 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-hubble-tls\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.328380 kubelet[2741]: I0213 22:53:57.327696 2741 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-bpf-maps\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.328380 kubelet[2741]: I0213 22:53:57.327711 2741 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-xtables-lock\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.328380 kubelet[2741]: I0213 22:53:57.327727 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-cilium-run\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.328380 kubelet[2741]: I0213 22:53:57.327743 2741 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5-clustermesh-secrets\") on node \"srv-uk33m.gb1.brightbox.com\" DevicePath \"\"" Feb 13 22:53:57.509070 systemd[1]: Removed slice kubepods-burstable-pod039cfe1d_50d6_49a8_83ef_f60be1c2e6d5.slice - libcontainer container kubepods-burstable-pod039cfe1d_50d6_49a8_83ef_f60be1c2e6d5.slice. Feb 13 22:53:57.509617 systemd[1]: kubepods-burstable-pod039cfe1d_50d6_49a8_83ef_f60be1c2e6d5.slice: Consumed 10.068s CPU time. 
Feb 13 22:53:57.520638 systemd[1]: Removed slice kubepods-besteffort-pod5adf25ac_5cb9_4335_be07_d75abc733b91.slice - libcontainer container kubepods-besteffort-pod5adf25ac_5cb9_4335_be07_d75abc733b91.slice. Feb 13 22:53:57.680355 kubelet[2741]: I0213 22:53:57.679830 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" path="/var/lib/kubelet/pods/039cfe1d-50d6-49a8-83ef-f60be1c2e6d5/volumes" Feb 13 22:53:57.681295 kubelet[2741]: I0213 22:53:57.681239 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5adf25ac-5cb9-4335-be07-d75abc733b91" path="/var/lib/kubelet/pods/5adf25ac-5cb9-4335-be07-d75abc733b91/volumes" Feb 13 22:53:57.723516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe524400ab5759d4ac6df93612289220948c8f51939667a8ce94e020884ae21f-rootfs.mount: Deactivated successfully. Feb 13 22:53:57.723759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44024082abf2ed3847030e66c247dd5da4f3bb5f2d94348290750d890bf2f2f5-rootfs.mount: Deactivated successfully. Feb 13 22:53:57.723897 systemd[1]: var-lib-kubelet-pods-5adf25ac\x2d5cb9\x2d4335\x2dbe07\x2dd75abc733b91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dknrdj.mount: Deactivated successfully. Feb 13 22:53:57.724055 systemd[1]: var-lib-kubelet-pods-039cfe1d\x2d50d6\x2d49a8\x2d83ef\x2df60be1c2e6d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxphgl.mount: Deactivated successfully. Feb 13 22:53:57.724172 systemd[1]: var-lib-kubelet-pods-039cfe1d\x2d50d6\x2d49a8\x2d83ef\x2df60be1c2e6d5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 22:53:57.724307 systemd[1]: var-lib-kubelet-pods-039cfe1d\x2d50d6\x2d49a8\x2d83ef\x2df60be1c2e6d5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 22:53:58.739652 sshd[4345]: Connection closed by 147.75.109.163 port 52626 Feb 13 22:53:58.741045 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Feb 13 22:53:58.746953 systemd[1]: sshd@26-10.230.69.138:22-147.75.109.163:52626.service: Deactivated successfully. Feb 13 22:53:58.749538 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 22:53:58.750682 systemd-logind[1489]: Session 27 logged out. Waiting for processes to exit. Feb 13 22:53:58.752169 systemd-logind[1489]: Removed session 27. Feb 13 22:53:58.901059 kubelet[2741]: E0213 22:53:58.900981 2741 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 22:53:58.902080 systemd[1]: Started sshd@27-10.230.69.138:22-147.75.109.163:52632.service - OpenSSH per-connection server daemon (147.75.109.163:52632). Feb 13 22:53:59.806460 sshd[4508]: Accepted publickey for core from 147.75.109.163 port 52632 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:53:59.808843 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:53:59.816517 systemd-logind[1489]: New session 28 of user core. Feb 13 22:53:59.824857 systemd[1]: Started session-28.scope - Session 28 of User core. 
Feb 13 22:54:01.576761 kubelet[2741]: I0213 22:54:01.571988 2741 memory_manager.go:355] "RemoveStaleState removing state" podUID="039cfe1d-50d6-49a8-83ef-f60be1c2e6d5" containerName="cilium-agent" Feb 13 22:54:01.576761 kubelet[2741]: I0213 22:54:01.576758 2741 memory_manager.go:355] "RemoveStaleState removing state" podUID="5adf25ac-5cb9-4335-be07-d75abc733b91" containerName="cilium-operator" Feb 13 22:54:01.611206 systemd[1]: Created slice kubepods-burstable-pod5c90da55_bfc0_49cb_9e80_0ce25abf7620.slice - libcontainer container kubepods-burstable-pod5c90da55_bfc0_49cb_9e80_0ce25abf7620.slice. Feb 13 22:54:01.659628 kubelet[2741]: I0213 22:54:01.658719 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-host-proc-sys-net\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.659628 kubelet[2741]: I0213 22:54:01.658787 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-lib-modules\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.659628 kubelet[2741]: I0213 22:54:01.658841 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-host-proc-sys-kernel\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.659628 kubelet[2741]: I0213 22:54:01.658876 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-cni-path\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.659628 kubelet[2741]: I0213 22:54:01.658910 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-bpf-maps\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.659628 kubelet[2741]: I0213 22:54:01.658946 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-hostproc\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660060 kubelet[2741]: I0213 22:54:01.658971 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-etc-cni-netd\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660060 kubelet[2741]: I0213 22:54:01.659004 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-xtables-lock\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " 
pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660060 kubelet[2741]: I0213 22:54:01.659044 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c90da55-bfc0-49cb-9e80-0ce25abf7620-clustermesh-secrets\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660060 kubelet[2741]: I0213 22:54:01.659073 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mw4w\" (UniqueName: \"kubernetes.io/projected/5c90da55-bfc0-49cb-9e80-0ce25abf7620-kube-api-access-2mw4w\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660060 kubelet[2741]: I0213 22:54:01.659106 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c90da55-bfc0-49cb-9e80-0ce25abf7620-cilium-config-path\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660276 kubelet[2741]: I0213 22:54:01.659141 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-cilium-cgroup\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660276 kubelet[2741]: I0213 22:54:01.659167 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c90da55-bfc0-49cb-9e80-0ce25abf7620-hubble-tls\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660276 kubelet[2741]: I0213 22:54:01.659211 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c90da55-bfc0-49cb-9e80-0ce25abf7620-cilium-run\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.660276 kubelet[2741]: I0213 22:54:01.659252 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5c90da55-bfc0-49cb-9e80-0ce25abf7620-cilium-ipsec-secrets\") pod \"cilium-kp8lk\" (UID: \"5c90da55-bfc0-49cb-9e80-0ce25abf7620\") " pod="kube-system/cilium-kp8lk" Feb 13 22:54:01.699378 sshd[4510]: Connection closed by 147.75.109.163 port 52632 Feb 13 22:54:01.701646 sshd-session[4508]: pam_unix(sshd:session): session closed for user core Feb 13 22:54:01.708213 systemd-logind[1489]: Session 28 logged out. Waiting for processes to exit. Feb 13 22:54:01.713441 systemd[1]: sshd@27-10.230.69.138:22-147.75.109.163:52632.service: Deactivated successfully. Feb 13 22:54:01.718387 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 22:54:01.719271 systemd[1]: session-28.scope: Consumed 1.186s CPU time. Feb 13 22:54:01.720790 systemd-logind[1489]: Removed session 28. Feb 13 22:54:01.864064 systemd[1]: Started sshd@28-10.230.69.138:22-147.75.109.163:49860.service - OpenSSH per-connection server daemon (147.75.109.163:49860). 
Feb 13 22:54:01.919125 containerd[1513]: time="2025-02-13T22:54:01.918990212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp8lk,Uid:5c90da55-bfc0-49cb-9e80-0ce25abf7620,Namespace:kube-system,Attempt:0,}" Feb 13 22:54:01.958873 containerd[1513]: time="2025-02-13T22:54:01.958455087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:54:01.958873 containerd[1513]: time="2025-02-13T22:54:01.958565130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:54:01.958873 containerd[1513]: time="2025-02-13T22:54:01.958586488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:54:01.960671 containerd[1513]: time="2025-02-13T22:54:01.960471866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:54:01.985872 systemd[1]: Started cri-containerd-4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3.scope - libcontainer container 4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3. Feb 13 22:54:02.027143 containerd[1513]: time="2025-02-13T22:54:02.027075990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp8lk,Uid:5c90da55-bfc0-49cb-9e80-0ce25abf7620,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\"" Feb 13 22:54:02.033512 containerd[1513]: time="2025-02-13T22:54:02.033348189Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 22:54:02.059030 containerd[1513]: time="2025-02-13T22:54:02.058871209Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e\"" Feb 13 22:54:02.061321 containerd[1513]: time="2025-02-13T22:54:02.060023609Z" level=info msg="StartContainer for \"c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e\"" Feb 13 22:54:02.102951 systemd[1]: Started cri-containerd-c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e.scope - libcontainer container c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e. Feb 13 22:54:02.165135 containerd[1513]: time="2025-02-13T22:54:02.164915875Z" level=info msg="StartContainer for \"c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e\" returns successfully" Feb 13 22:54:02.186378 systemd[1]: cri-containerd-c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e.scope: Deactivated successfully. 
Feb 13 22:54:02.244262 containerd[1513]: time="2025-02-13T22:54:02.244128627Z" level=info msg="shim disconnected" id=c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e namespace=k8s.io Feb 13 22:54:02.245185 containerd[1513]: time="2025-02-13T22:54:02.245042563Z" level=warning msg="cleaning up after shim disconnected" id=c7ab6f65ce7c0d5b1c8b89c6798d2a2889ff6f56c7d60c982bffe2ee6b58b86e namespace=k8s.io Feb 13 22:54:02.245185 containerd[1513]: time="2025-02-13T22:54:02.245071012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:54:02.773132 sshd[4525]: Accepted publickey for core from 147.75.109.163 port 49860 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:54:02.775084 sshd-session[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:54:02.788543 systemd-logind[1489]: New session 29 of user core. Feb 13 22:54:02.798957 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 22:54:03.247428 containerd[1513]: time="2025-02-13T22:54:03.246649954Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 22:54:03.278561 containerd[1513]: time="2025-02-13T22:54:03.278455650Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6\"" Feb 13 22:54:03.280897 containerd[1513]: time="2025-02-13T22:54:03.280849081Z" level=info msg="StartContainer for \"0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6\"" Feb 13 22:54:03.387909 systemd[1]: Started cri-containerd-0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6.scope - libcontainer container 0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6. Feb 13 22:54:03.404865 sshd[4625]: Connection closed by 147.75.109.163 port 49860 Feb 13 22:54:03.404007 sshd-session[4525]: pam_unix(sshd:session): session closed for user core Feb 13 22:54:03.412975 systemd[1]: sshd@28-10.230.69.138:22-147.75.109.163:49860.service: Deactivated successfully. Feb 13 22:54:03.417859 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 22:54:03.422148 systemd-logind[1489]: Session 29 logged out. Waiting for processes to exit. Feb 13 22:54:03.427119 systemd-logind[1489]: Removed session 29. Feb 13 22:54:03.464477 containerd[1513]: time="2025-02-13T22:54:03.464380975Z" level=info msg="StartContainer for \"0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6\" returns successfully" Feb 13 22:54:03.475516 systemd[1]: cri-containerd-0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6.scope: Deactivated successfully. Feb 13 22:54:03.509075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6-rootfs.mount: Deactivated successfully. 
Feb 13 22:54:03.514842 containerd[1513]: time="2025-02-13T22:54:03.514721364Z" level=info msg="shim disconnected" id=0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6 namespace=k8s.io Feb 13 22:54:03.514842 containerd[1513]: time="2025-02-13T22:54:03.514818145Z" level=warning msg="cleaning up after shim disconnected" id=0ec723021de13cbb6e1983e8965956a528d8f0de2b5282d827c22b20874dfef6 namespace=k8s.io Feb 13 22:54:03.514842 containerd[1513]: time="2025-02-13T22:54:03.514835965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:54:03.573240 systemd[1]: Started sshd@29-10.230.69.138:22-147.75.109.163:49874.service - OpenSSH per-connection server daemon (147.75.109.163:49874). Feb 13 22:54:03.904008 kubelet[2741]: E0213 22:54:03.903757 2741 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 22:54:04.247089 containerd[1513]: time="2025-02-13T22:54:04.247026944Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 22:54:04.272113 containerd[1513]: time="2025-02-13T22:54:04.271874542Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d\"" Feb 13 22:54:04.273632 containerd[1513]: time="2025-02-13T22:54:04.273141551Z" level=info msg="StartContainer for \"303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d\"" Feb 13 22:54:04.317984 systemd[1]: run-containerd-runc-k8s.io-303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d-runc.nKZPzj.mount: Deactivated successfully. Feb 13 22:54:04.329876 systemd[1]: Started cri-containerd-303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d.scope - libcontainer container 303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d. Feb 13 22:54:04.385064 containerd[1513]: time="2025-02-13T22:54:04.384949318Z" level=info msg="StartContainer for \"303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d\" returns successfully" Feb 13 22:54:04.393637 systemd[1]: cri-containerd-303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d.scope: Deactivated successfully. Feb 13 22:54:04.429235 containerd[1513]: time="2025-02-13T22:54:04.429075827Z" level=info msg="shim disconnected" id=303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d namespace=k8s.io Feb 13 22:54:04.429235 containerd[1513]: time="2025-02-13T22:54:04.429210176Z" level=warning msg="cleaning up after shim disconnected" id=303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d namespace=k8s.io Feb 13 22:54:04.429235 containerd[1513]: time="2025-02-13T22:54:04.429240072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:54:04.489379 sshd[4693]: Accepted publickey for core from 147.75.109.163 port 49874 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:54:04.491791 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:54:04.500469 systemd-logind[1489]: New session 30 of user core. Feb 13 22:54:04.506316 systemd[1]: Started session-30.scope - Session 30 of User core. 
Feb 13 22:54:05.251908 containerd[1513]: time="2025-02-13T22:54:05.251851037Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 22:54:05.266894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-303dd975ecd389fa370ea9467402c6fc669e9fbf17bf07cdc922c8f9a0e9851d-rootfs.mount: Deactivated successfully. Feb 13 22:54:05.268639 containerd[1513]: time="2025-02-13T22:54:05.268574305Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f\"" Feb 13 22:54:05.270990 containerd[1513]: time="2025-02-13T22:54:05.270952852Z" level=info msg="StartContainer for \"67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f\"" Feb 13 22:54:05.320931 systemd[1]: Started cri-containerd-67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f.scope - libcontainer container 67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f. Feb 13 22:54:05.367634 containerd[1513]: time="2025-02-13T22:54:05.367265160Z" level=info msg="StartContainer for \"67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f\" returns successfully" Feb 13 22:54:05.370981 systemd[1]: cri-containerd-67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f.scope: Deactivated successfully. Feb 13 22:54:05.400570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f-rootfs.mount: Deactivated successfully. Feb 13 22:54:05.408323 containerd[1513]: time="2025-02-13T22:54:05.408199808Z" level=info msg="shim disconnected" id=67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f namespace=k8s.io Feb 13 22:54:05.408323 containerd[1513]: time="2025-02-13T22:54:05.408313127Z" level=warning msg="cleaning up after shim disconnected" id=67195b21c9a441ef4db99db70532bada9a74f8c1d14f06dcbf1f5f1cb802ba0f namespace=k8s.io Feb 13 22:54:05.408323 containerd[1513]: time="2025-02-13T22:54:05.408331913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:54:06.261516 containerd[1513]: time="2025-02-13T22:54:06.260941980Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 22:54:06.282173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730629590.mount: Deactivated successfully. Feb 13 22:54:06.290570 containerd[1513]: time="2025-02-13T22:54:06.289214601Z" level=info msg="CreateContainer within sandbox \"4cf2cf2daa0d5ae40dcb86a6f82bd3bf8b1349560a8bbb00434e865a470f6df3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12ec40bf59444d8bf4b72f0ed806fe6a061ca442b95c88c21ce9dfa10c942977\"" Feb 13 22:54:06.291969 containerd[1513]: time="2025-02-13T22:54:06.291937528Z" level=info msg="StartContainer for \"12ec40bf59444d8bf4b72f0ed806fe6a061ca442b95c88c21ce9dfa10c942977\"" Feb 13 22:54:06.332983 systemd[1]: Started cri-containerd-12ec40bf59444d8bf4b72f0ed806fe6a061ca442b95c88c21ce9dfa10c942977.scope - libcontainer container 12ec40bf59444d8bf4b72f0ed806fe6a061ca442b95c88c21ce9dfa10c942977. 
Feb 13 22:54:06.383898 containerd[1513]: time="2025-02-13T22:54:06.383820904Z" level=info msg="StartContainer for \"12ec40bf59444d8bf4b72f0ed806fe6a061ca442b95c88c21ce9dfa10c942977\" returns successfully" Feb 13 22:54:07.093958 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 22:54:07.110862 kubelet[2741]: I0213 22:54:07.110159 2741 setters.go:602] "Node became not ready" node="srv-uk33m.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T22:54:07Z","lastTransitionTime":"2025-02-13T22:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 22:54:07.291374 kubelet[2741]: I0213 22:54:07.291298 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kp8lk" podStartSLOduration=6.291271892 podStartE2EDuration="6.291271892s" podCreationTimestamp="2025-02-13 22:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:54:07.289461554 +0000 UTC m=+153.819736550" watchObservedRunningTime="2025-02-13 22:54:07.291271892 +0000 UTC m=+153.821546875" Feb 13 22:54:10.892363 systemd-networkd[1439]: lxc_health: Link UP Feb 13 22:54:10.923818 systemd-networkd[1439]: lxc_health: Gained carrier Feb 13 22:54:12.086930 systemd-networkd[1439]: lxc_health: Gained IPv6LL Feb 13 22:54:14.301576 systemd[1]: run-containerd-runc-k8s.io-12ec40bf59444d8bf4b72f0ed806fe6a061ca442b95c88c21ce9dfa10c942977-runc.txLnQb.mount: Deactivated successfully. Feb 13 22:54:16.475532 systemd[1]: run-containerd-runc-k8s.io-12ec40bf59444d8bf4b72f0ed806fe6a061ca442b95c88c21ce9dfa10c942977-runc.QtQq9U.mount: Deactivated successfully. Feb 13 22:54:16.560955 kubelet[2741]: E0213 22:54:16.560878 2741 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44240->127.0.0.1:33565: write tcp 127.0.0.1:44240->127.0.0.1:33565: write: broken pipe Feb 13 22:54:16.744631 sshd[4752]: Connection closed by 147.75.109.163 port 49874 Feb 13 22:54:16.747370 sshd-session[4693]: pam_unix(sshd:session): session closed for user core Feb 13 22:54:16.752631 systemd-logind[1489]: Session 30 logged out. Waiting for processes to exit. Feb 13 22:54:16.755520 systemd[1]: sshd@29-10.230.69.138:22-147.75.109.163:49874.service: Deactivated successfully. Feb 13 22:54:16.760318 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 22:54:16.766537 systemd-logind[1489]: Removed session 30.