Jan 29 12:57:07.015822 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 12:57:07.015904 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 12:57:07.015917 kernel: BIOS-provided physical RAM map:
Jan 29 12:57:07.015933 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 12:57:07.015943 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 12:57:07.015952 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 12:57:07.015980 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 29 12:57:07.015990 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 29 12:57:07.015999 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 12:57:07.016008 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 12:57:07.016017 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 12:57:07.016026 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 12:57:07.016053 kernel: NX (Execute Disable) protection: active
Jan 29 12:57:07.016063 kernel: APIC: Static calls initialized
Jan 29 12:57:07.016074 kernel: SMBIOS 2.8 present.
Jan 29 12:57:07.016085 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Jan 29 12:57:07.016107 kernel: Hypervisor detected: KVM
Jan 29 12:57:07.016121 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 12:57:07.016132 kernel: kvm-clock: using sched offset of 4525095020 cycles
Jan 29 12:57:07.016142 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 12:57:07.016153 kernel: tsc: Detected 2799.998 MHz processor
Jan 29 12:57:07.016163 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:57:07.016173 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:57:07.016183 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 29 12:57:07.016193 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 12:57:07.016203 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:57:07.016218 kernel: Using GB pages for direct mapping
Jan 29 12:57:07.016228 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:57:07.016238 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Jan 29 12:57:07.016248 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:57:07.016258 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:57:07.016268 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:57:07.016278 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 29 12:57:07.016288 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:57:07.016298 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:57:07.016312 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:57:07.016322 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:57:07.016333 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 29 12:57:07.016342 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 29 12:57:07.016353 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 29 12:57:07.016375 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 29 12:57:07.016398 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 29 12:57:07.016412 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 29 12:57:07.016423 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 29 12:57:07.016433 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 12:57:07.016443 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 12:57:07.016466 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 12:57:07.016477 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 29 12:57:07.016490 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 12:57:07.016511 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 29 12:57:07.016527 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 12:57:07.016537 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 29 12:57:07.016548 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 12:57:07.016558 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 29 12:57:07.016568 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 12:57:07.016579 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 29 12:57:07.016589 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 12:57:07.016599 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 29 12:57:07.016610 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 12:57:07.016624 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 29 12:57:07.016635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 12:57:07.016645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 12:57:07.016656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 29 12:57:07.016667 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 29 12:57:07.016677 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 29 12:57:07.016688 kernel: Zone ranges:
Jan 29 12:57:07.016698 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:57:07.016709 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 29 12:57:07.016719 kernel: Normal empty
Jan 29 12:57:07.016734 kernel: Movable zone start for each node
Jan 29 12:57:07.016745 kernel: Early memory node ranges
Jan 29 12:57:07.016755 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 12:57:07.016765 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 29 12:57:07.016776 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 29 12:57:07.017128 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:57:07.017140 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 12:57:07.017150 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 29 12:57:07.017161 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 12:57:07.017177 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 12:57:07.017187 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 12:57:07.017198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 12:57:07.017208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 12:57:07.017218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:57:07.017229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 12:57:07.017239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 12:57:07.017249 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:57:07.017259 kernel: TSC deadline timer available
Jan 29 12:57:07.017274 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 29 12:57:07.017284 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 12:57:07.017294 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 12:57:07.017305 kernel: Booting paravirtualized kernel on KVM
Jan 29 12:57:07.017315 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:57:07.017326 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 12:57:07.017336 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 12:57:07.017346 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 12:57:07.017356 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 12:57:07.017371 kernel: kvm-guest: PV spinlocks enabled
Jan 29 12:57:07.017381 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 12:57:07.017393 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 12:57:07.017410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:57:07.017420 kernel: random: crng init done
Jan 29 12:57:07.017430 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:57:07.017440 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 12:57:07.017450 kernel: Fallback order for Node 0: 0
Jan 29 12:57:07.017472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 29 12:57:07.017483 kernel: Policy zone: DMA32
Jan 29 12:57:07.017493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:57:07.017516 kernel: software IO TLB: area num 16.
Jan 29 12:57:07.017535 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 194824K reserved, 0K cma-reserved)
Jan 29 12:57:07.017546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 12:57:07.017556 kernel: Kernel/User page tables isolation: enabled
Jan 29 12:57:07.017566 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 12:57:07.017576 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:57:07.017591 kernel: Dynamic Preempt: voluntary
Jan 29 12:57:07.017602 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:57:07.017613 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:57:07.017624 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 12:57:07.017634 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:57:07.017654 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:57:07.017669 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:57:07.017680 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:57:07.017691 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 12:57:07.017701 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 29 12:57:07.017712 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:57:07.017723 kernel: Console: colour VGA+ 80x25
Jan 29 12:57:07.017737 kernel: printk: console [tty0] enabled
Jan 29 12:57:07.017748 kernel: printk: console [ttyS0] enabled
Jan 29 12:57:07.017759 kernel: ACPI: Core revision 20230628
Jan 29 12:57:07.017770 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:57:07.019813 kernel: x2apic enabled
Jan 29 12:57:07.019841 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 12:57:07.019854 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 29 12:57:07.019866 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 29 12:57:07.019878 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 12:57:07.019890 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 12:57:07.019901 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 12:57:07.019913 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 12:57:07.019936 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 12:57:07.019947 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 12:57:07.019963 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 12:57:07.019980 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 12:57:07.019991 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 12:57:07.020014 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 12:57:07.020025 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 12:57:07.020037 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 29 12:57:07.020048 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 29 12:57:07.020059 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 12:57:07.020071 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 12:57:07.020082 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 12:57:07.020093 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 12:57:07.020105 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 12:57:07.020121 kernel: Freeing SMP alternatives memory: 32K
Jan 29 12:57:07.020140 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:57:07.020151 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:57:07.020163 kernel: landlock: Up and running.
Jan 29 12:57:07.020174 kernel: SELinux: Initializing.
Jan 29 12:57:07.020185 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 12:57:07.020202 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 12:57:07.020213 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 29 12:57:07.020225 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 12:57:07.020236 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 12:57:07.020252 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 12:57:07.020264 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 29 12:57:07.020276 kernel: signal: max sigframe size: 1776
Jan 29 12:57:07.020287 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:57:07.020299 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:57:07.020311 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 12:57:07.020322 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:57:07.020333 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 12:57:07.020345 kernel: .... node #0, CPUs: #1
Jan 29 12:57:07.020372 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 29 12:57:07.020383 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 12:57:07.020395 kernel: smpboot: Max logical packages: 16
Jan 29 12:57:07.020406 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 29 12:57:07.020429 kernel: devtmpfs: initialized
Jan 29 12:57:07.020440 kernel: x86/mm: Memory block size: 128MB
Jan 29 12:57:07.020450 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:57:07.020461 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 12:57:07.020484 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:57:07.020509 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:57:07.020521 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:57:07.020533 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:57:07.020544 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 12:57:07.020555 kernel: audit: type=2000 audit(1738155425.191:1): state=initialized audit_enabled=0 res=1
Jan 29 12:57:07.020566 kernel: cpuidle: using governor menu
Jan 29 12:57:07.020577 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:57:07.020588 kernel: dca service started, version 1.12.1
Jan 29 12:57:07.020599 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 12:57:07.020615 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 12:57:07.020627 kernel: PCI: Using configuration type 1 for base access
Jan 29 12:57:07.020638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 12:57:07.020649 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:57:07.020660 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:57:07.020672 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:57:07.020683 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:57:07.020694 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:57:07.020705 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:57:07.020720 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:57:07.020744 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:57:07.020756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:57:07.020767 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 12:57:07.020778 kernel: ACPI: Interpreter enabled
Jan 29 12:57:07.020790 kernel: ACPI: PM: (supports S0 S5)
Jan 29 12:57:07.020814 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 12:57:07.020826 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 12:57:07.020838 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 12:57:07.020854 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 12:57:07.020866 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:57:07.021135 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:57:07.021306 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 12:57:07.021486 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 12:57:07.021515 kernel: PCI host bridge to bus 0000:00
Jan 29 12:57:07.021702 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 12:57:07.022654 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 12:57:07.022830 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 12:57:07.022991 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 29 12:57:07.023127 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 12:57:07.023266 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 29 12:57:07.023397 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:57:07.023597 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 12:57:07.023815 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 29 12:57:07.024014 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 29 12:57:07.024189 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 29 12:57:07.024345 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 29 12:57:07.024551 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 12:57:07.029986 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.030180 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 29 12:57:07.030381 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.030578 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 29 12:57:07.030748 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.030944 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 29 12:57:07.031118 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.031268 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 29 12:57:07.031477 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.031664 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 29 12:57:07.033973 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.034150 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 29 12:57:07.034322 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.034487 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 29 12:57:07.034688 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 12:57:07.034871 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 29 12:57:07.035060 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 12:57:07.035209 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Jan 29 12:57:07.035368 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 29 12:57:07.035559 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 12:57:07.035723 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 29 12:57:07.040355 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 12:57:07.040566 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Jan 29 12:57:07.040726 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 29 12:57:07.040925 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 29 12:57:07.041104 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 12:57:07.041254 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 12:57:07.041433 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 12:57:07.041615 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Jan 29 12:57:07.041772 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 29 12:57:07.041994 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 12:57:07.042148 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 12:57:07.042322 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 29 12:57:07.042480 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 29 12:57:07.042659 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 12:57:07.042829 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 29 12:57:07.042986 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 12:57:07.043150 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 12:57:07.043360 kernel: pci_bus 0000:02: extended config space not accessible
Jan 29 12:57:07.043583 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 29 12:57:07.043754 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 29 12:57:07.047529 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 12:57:07.047702 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 29 12:57:07.047893 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 12:57:07.048057 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 12:57:07.048250 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 12:57:07.048426 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 29 12:57:07.048600 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 12:57:07.048756 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 12:57:07.048925 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 12:57:07.049109 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 12:57:07.049285 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 12:57:07.049457 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 12:57:07.049634 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 12:57:07.053646 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 12:57:07.053876 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 12:57:07.054041 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 12:57:07.054201 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 12:57:07.054362 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 12:57:07.054532 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 12:57:07.054690 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 12:57:07.054887 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 12:57:07.055060 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 12:57:07.055214 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 12:57:07.055386 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 12:57:07.055563 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 12:57:07.055714 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 12:57:07.057043 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 12:57:07.057211 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 12:57:07.057365 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 12:57:07.057392 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 12:57:07.057405 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 12:57:07.057417 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 12:57:07.057429 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 12:57:07.057441 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 12:57:07.057453 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 12:57:07.057465 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 12:57:07.057477 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 12:57:07.057505 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 12:57:07.057521 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 12:57:07.057533 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 12:57:07.057545 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 12:57:07.057556 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 12:57:07.057568 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 12:57:07.057580 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 12:57:07.057592 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 12:57:07.057604 kernel: iommu: Default domain type: Translated
Jan 29 12:57:07.057616 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 12:57:07.057634 kernel: PCI: Using ACPI for IRQ routing
Jan 29 12:57:07.057646 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 12:57:07.057658 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 12:57:07.057670 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 29 12:57:07.058859 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 12:57:07.059025 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 12:57:07.059182 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 12:57:07.059214 kernel: vgaarb: loaded
Jan 29 12:57:07.059234 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 12:57:07.059246 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:57:07.059265 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:57:07.059277 kernel: pnp: PnP ACPI init
Jan 29 12:57:07.059456 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 12:57:07.059475 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 12:57:07.059507 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 12:57:07.059521 kernel: NET: Registered PF_INET protocol family
Jan 29 12:57:07.059540 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:57:07.059553 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 12:57:07.059565 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:57:07.059577 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 12:57:07.059589 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 12:57:07.059601 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 12:57:07.059613 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 12:57:07.059624 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 12:57:07.059636 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:57:07.059653 kernel: NET: Registered PF_XDP protocol family
Jan 29 12:57:07.061843 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 12:57:07.062038 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 12:57:07.062210 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 12:57:07.062368 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 12:57:07.062540 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 12:57:07.062706 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 12:57:07.062893 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 12:57:07.063055 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 12:57:07.063208 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 12:57:07.063360 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 12:57:07.063533 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 12:57:07.063690 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 12:57:07.064912 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 12:57:07.065115 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 12:57:07.065307 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 12:57:07.065469 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 29 12:57:07.065655 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 12:57:07.065846 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 12:57:07.066012 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 12:57:07.066167 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 29 12:57:07.066349 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 12:57:07.066516 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 12:57:07.066683 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 12:57:07.066905 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Jan 29 12:57:07.067062 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 12:57:07.067229 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 12:57:07.067493 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 12:57:07.067670 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Jan 29 12:57:07.067898 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 12:57:07.068054 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 12:57:07.068235 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 12:57:07.068389 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Jan 29 12:57:07.068565 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 12:57:07.068746 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 12:57:07.068930 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 12:57:07.069084 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Jan 29 12:57:07.069263 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 12:57:07.069441 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 12:57:07.069616 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 12:57:07.069834 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Jan 29 12:57:07.069990 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 12:57:07.070150 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 12:57:07.070302 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 12:57:07.070452 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Jan 29 12:57:07.070618 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 12:57:07.070771 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 12:57:07.070972 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 12:57:07.071133 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Jan 29 12:57:07.071284 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 12:57:07.071434 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 12:57:07.071638 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 12:57:07.071849 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 12:57:07.071988 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 12:57:07.072149 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 29 12:57:07.072297 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 12:57:07.072435 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 29 12:57:07.072608 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Jan 29 12:57:07.072755 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 29 12:57:07.072941 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 12:57:07.073107 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff]
Jan 29 12:57:07.073259 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 12:57:07.073412 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 12:57:07.073616 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
Jan 29 12:57:07.073788 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 12:57:07.073999 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 12:57:07.074171 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff]
Jan 29 12:57:07.074316 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 12:57:07.074471 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 12:57:07.074646 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff]
Jan 29 12:57:07.074810 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 12:57:07.074961 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 12:57:07.075151 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff]
Jan 29 12:57:07.075315 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 12:57:07.075455 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 12:57:07.075640 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff]
Jan 29 12:57:07.075836 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 12:57:07.076004 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 12:57:07.076149 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff]
Jan 29 12:57:07.076302 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 12:57:07.076438 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 12:57:07.076612 kernel: pci_bus 0000:09: resource 0 [io 
0x7000-0x7fff] Jan 29 12:57:07.076757 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 29 12:57:07.076952 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 29 12:57:07.076972 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 12:57:07.076991 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:57:07.077004 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:57:07.077017 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 29 12:57:07.077038 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 12:57:07.077051 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 29 12:57:07.077063 kernel: Initialise system trusted keyrings Jan 29 12:57:07.077075 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 12:57:07.077087 kernel: Key type asymmetric registered Jan 29 12:57:07.077099 kernel: Asymmetric key parser 'x509' registered Jan 29 12:57:07.077116 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:57:07.077128 kernel: io scheduler mq-deadline registered Jan 29 12:57:07.077140 kernel: io scheduler kyber registered Jan 29 12:57:07.077152 kernel: io scheduler bfq registered Jan 29 12:57:07.077327 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 29 12:57:07.077493 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 29 12:57:07.077665 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 12:57:07.077885 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 29 12:57:07.078030 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 29 12:57:07.078182 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ 
Jan 29 12:57:07.078345 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 29 12:57:07.078517 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 29 12:57:07.078672 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 12:57:07.078863 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 29 12:57:07.079021 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 29 12:57:07.079209 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 12:57:07.079364 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 29 12:57:07.079555 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 29 12:57:07.079702 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 12:57:07.079901 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 29 12:57:07.080040 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 29 12:57:07.080211 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 12:57:07.080358 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 29 12:57:07.080531 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 29 12:57:07.080685 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 12:57:07.080870 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 29 12:57:07.081021 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 29 12:57:07.081196 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 12:57:07.081215 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 12:57:07.081228 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 12:57:07.081240 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 12:57:07.081252 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:57:07.081265 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 12:57:07.081300 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 12:57:07.081320 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 12:57:07.081332 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 12:57:07.081518 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 12:57:07.081540 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 12:57:07.081681 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 12:57:07.081889 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T12:57:06 UTC (1738155426)
Jan 29 12:57:07.082026 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 29 12:57:07.082043 kernel: intel_pstate: CPU model not supported
Jan 29 12:57:07.082062 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:57:07.082074 kernel: Segment Routing with IPv6
Jan 29 12:57:07.082090 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:57:07.082112 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:57:07.082124 kernel: Key type dns_resolver registered
Jan 29 12:57:07.082135 kernel: IPI shorthand broadcast: enabled
Jan 29 12:57:07.082147 kernel: sched_clock: Marking stable (1271003747, 224709416)->(1620807019, -125093856)
Jan 29 12:57:07.082159 kernel: registered taskstats version 1
Jan 29 12:57:07.082176 kernel: Loading compiled-in X.509 certificates
Jan 29 12:57:07.082192 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 12:57:07.082204 kernel: Key type .fscrypt registered
Jan 29 12:57:07.082215 kernel: Key type fscrypt-provisioning registered
Jan 29 12:57:07.082244 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:57:07.082256 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:57:07.082268 kernel: ima: No architecture policies found
Jan 29 12:57:07.082280 kernel: clk: Disabling unused clocks
Jan 29 12:57:07.082305 kernel: Freeing unused kernel image (initmem) memory: 42972K
Jan 29 12:57:07.082317 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 12:57:07.082333 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 29 12:57:07.082345 kernel: Run /init as init process
Jan 29 12:57:07.082357 kernel: with arguments:
Jan 29 12:57:07.082369 kernel: /init
Jan 29 12:57:07.082380 kernel: with environment:
Jan 29 12:57:07.082392 kernel: HOME=/
Jan 29 12:57:07.082403 kernel: TERM=linux
Jan 29 12:57:07.082414 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:57:07.082436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:57:07.082458 systemd[1]: Detected virtualization kvm.
Jan 29 12:57:07.082471 systemd[1]: Detected architecture x86-64.
Jan 29 12:57:07.082506 systemd[1]: Running in initrd.
Jan 29 12:57:07.082520 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:57:07.082533 systemd[1]: Hostname set to .
Jan 29 12:57:07.082546 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:57:07.082559 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:57:07.082578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:57:07.082592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:57:07.082606 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:57:07.082620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:57:07.082633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:57:07.082647 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:57:07.082662 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:57:07.082681 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:57:07.082694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:57:07.082708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:57:07.082721 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:57:07.082735 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:57:07.082748 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:57:07.082761 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:57:07.082801 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:57:07.082824 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:57:07.082839 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:57:07.082857 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:57:07.082871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:57:07.082884 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:57:07.082909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:57:07.082922 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:57:07.082935 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:57:07.082947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:57:07.082968 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:57:07.082994 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:57:07.083007 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:57:07.083027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:57:07.083089 systemd-journald[203]: Collecting audit messages is disabled.
Jan 29 12:57:07.083126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:57:07.083141 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:57:07.083154 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:57:07.083168 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:57:07.083191 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:57:07.083211 systemd-journald[203]: Journal started
Jan 29 12:57:07.083236 systemd-journald[203]: Runtime Journal (/run/log/journal/63262d5399b94086bfc0e60c94162f0b) is 4.7M, max 38.0M, 33.2M free.
Jan 29 12:57:07.052183 systemd-modules-load[204]: Inserted module 'overlay'
Jan 29 12:57:07.129030 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:57:07.129080 kernel: Bridge firewalling registered
Jan 29 12:57:07.129099 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:57:07.112108 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 29 12:57:07.131681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:57:07.138351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:57:07.139797 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:57:07.148024 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:57:07.150099 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:57:07.155031 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:57:07.156605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:57:07.177142 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:57:07.183930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:57:07.187164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:57:07.200037 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:57:07.202425 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:57:07.207192 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:57:07.220394 dracut-cmdline[235]: dracut-dracut-053
Jan 29 12:57:07.224164 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 12:57:07.263775 systemd-resolved[241]: Positive Trust Anchors:
Jan 29 12:57:07.263896 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:57:07.263958 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:57:07.268002 systemd-resolved[241]: Defaulting to hostname 'linux'.
Jan 29 12:57:07.270881 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:57:07.276253 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:57:07.338876 kernel: SCSI subsystem initialized
Jan 29 12:57:07.350869 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:57:07.362816 kernel: iscsi: registered transport (tcp)
Jan 29 12:57:07.389375 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:57:07.389431 kernel: QLogic iSCSI HBA Driver
Jan 29 12:57:07.445045 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:57:07.454028 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:57:07.494856 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:57:07.494941 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:57:07.497473 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:57:07.544828 kernel: raid6: sse2x4 gen() 13166 MB/s
Jan 29 12:57:07.562878 kernel: raid6: sse2x2 gen() 9314 MB/s
Jan 29 12:57:07.581573 kernel: raid6: sse2x1 gen() 6290 MB/s
Jan 29 12:57:07.581641 kernel: raid6: using algorithm sse2x4 gen() 13166 MB/s
Jan 29 12:57:07.600540 kernel: raid6: .... xor() 4649 MB/s, rmw enabled
Jan 29 12:57:07.600619 kernel: raid6: using ssse3x2 recovery algorithm
Jan 29 12:57:07.626830 kernel: xor: automatically using best checksumming function avx
Jan 29 12:57:07.821828 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:57:07.836741 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:57:07.844016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:57:07.870937 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Jan 29 12:57:07.877914 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:57:07.887438 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:57:07.912833 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Jan 29 12:57:07.952956 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:57:07.960096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:57:08.062982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:57:08.071994 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:57:08.095901 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:57:08.098409 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:57:08.099928 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:57:08.101881 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:57:08.112946 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:57:08.135759 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:57:08.182831 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 29 12:57:08.285508 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 29 12:57:08.285734 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 12:57:08.285807 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:57:08.285830 kernel: GPT:17805311 != 125829119
Jan 29 12:57:08.285847 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:57:08.285872 kernel: GPT:17805311 != 125829119
Jan 29 12:57:08.285888 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:57:08.285915 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:57:08.285933 kernel: ACPI: bus type USB registered
Jan 29 12:57:08.285949 kernel: usbcore: registered new interface driver usbfs
Jan 29 12:57:08.285966 kernel: usbcore: registered new interface driver hub
Jan 29 12:57:08.285983 kernel: usbcore: registered new device driver usb
Jan 29 12:57:08.285999 kernel: AVX version of gcm_enc/dec engaged.
Jan 29 12:57:08.286016 kernel: AES CTR mode by8 optimization enabled
Jan 29 12:57:08.246651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:57:08.246865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:57:08.249884 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:57:08.254522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:57:08.295036 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 29 12:57:08.338331 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 29 12:57:08.338564 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 12:57:08.338762 kernel: libata version 3.00 loaded.
Jan 29 12:57:08.340487 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 29 12:57:08.340792 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 29 12:57:08.342173 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 12:57:08.342396 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 12:57:08.397947 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 12:57:08.397980 kernel: hub 1-0:1.0: USB hub found
Jan 29 12:57:08.398267 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 12:57:08.398502 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 12:57:08.399043 kernel: hub 2-0:1.0: USB hub found
Jan 29 12:57:08.399307 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 12:57:08.399530 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (485)
Jan 29 12:57:08.399551 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 12:57:08.399738 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 12:57:08.400121 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
Jan 29 12:57:08.400142 kernel: scsi host0: ahci
Jan 29 12:57:08.400331 kernel: scsi host1: ahci
Jan 29 12:57:08.400556 kernel: scsi host2: ahci
Jan 29 12:57:08.400738 kernel: scsi host3: ahci
Jan 29 12:57:08.400938 kernel: scsi host4: ahci
Jan 29 12:57:08.401163 kernel: scsi host5: ahci
Jan 29 12:57:08.401344 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jan 29 12:57:08.401364 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jan 29 12:57:08.401388 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jan 29 12:57:08.401406 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jan 29 12:57:08.401423 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jan 29 12:57:08.401440 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jan 29 12:57:08.254699 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:57:08.255526 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:57:08.268112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:57:08.360622 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 12:57:08.461855 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 12:57:08.462697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 12:57:08.464704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:57:08.477710 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 12:57:08.493944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:57:08.509116 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:57:08.513956 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:57:08.517703 disk-uuid[565]: Primary Header is updated.
Jan 29 12:57:08.517703 disk-uuid[565]: Secondary Entries is updated.
Jan 29 12:57:08.517703 disk-uuid[565]: Secondary Header is updated.
Jan 29 12:57:08.523829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:57:08.531242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:57:08.546068 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:57:08.571888 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 12:57:08.708829 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 12:57:08.713600 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 29 12:57:08.713638 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 12:57:08.714312 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 12:57:08.714340 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 12:57:08.716821 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 12:57:08.719780 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 12:57:08.728413 kernel: usbcore: registered new interface driver usbhid
Jan 29 12:57:08.728464 kernel: usbhid: USB HID core driver
Jan 29 12:57:08.736173 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 29 12:57:08.736230 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 29 12:57:09.538274 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:57:09.538924 disk-uuid[566]: The operation has completed successfully.
Jan 29 12:57:09.589581 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:57:09.589751 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:57:09.615019 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:57:09.619024 sh[586]: Success
Jan 29 12:57:09.635882 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Jan 29 12:57:09.703030 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:57:09.708918 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:57:09.710650 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:57:09.746199 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44
Jan 29 12:57:09.746276 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:57:09.748328 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:57:09.751587 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:57:09.751636 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:57:09.762887 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:57:09.764485 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:57:09.771064 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:57:09.773958 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:57:09.796833 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:57:09.796895 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:57:09.796916 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:57:09.801868 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:57:09.816288 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:57:09.819573 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:57:09.832621 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:57:09.843090 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:57:09.953509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:57:09.967057 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:57:09.981593 ignition[698]: Ignition 2.20.0
Jan 29 12:57:09.981615 ignition[698]: Stage: fetch-offline
Jan 29 12:57:09.981753 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:57:09.981771 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:57:09.986399 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:57:09.984159 ignition[698]: parsed url from cmdline: ""
Jan 29 12:57:09.984165 ignition[698]: no config URL provided
Jan 29 12:57:09.984174 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:57:09.984220 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:57:09.984246 ignition[698]: failed to fetch config: resource requires networking
Jan 29 12:57:09.984644 ignition[698]: Ignition finished successfully
Jan 29 12:57:10.004343 systemd-networkd[774]: lo: Link UP
Jan 29 12:57:10.004359 systemd-networkd[774]: lo: Gained carrier
Jan 29 12:57:10.006695 systemd-networkd[774]: Enumeration completed
Jan 29 12:57:10.006842 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:57:10.007250 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:57:10.007256 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:57:10.007699 systemd[1]: Reached target network.target - Network.
Jan 29 12:57:10.009423 systemd-networkd[774]: eth0: Link UP
Jan 29 12:57:10.009429 systemd-networkd[774]: eth0: Gained carrier
Jan 29 12:57:10.009440 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:57:10.018130 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 12:57:10.036240 ignition[777]: Ignition 2.20.0
Jan 29 12:57:10.036254 ignition[777]: Stage: fetch
Jan 29 12:57:10.036527 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:57:10.036547 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:57:10.036707 ignition[777]: parsed url from cmdline: ""
Jan 29 12:57:10.036713 ignition[777]: no config URL provided
Jan 29 12:57:10.036722 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:57:10.036749 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:57:10.036964 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 12:57:10.037145 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 12:57:10.037184 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 12:57:10.037273 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 12:57:10.077902 systemd-networkd[774]: eth0: DHCPv4 address 10.243.84.18/30, gateway 10.243.84.17 acquired from 10.243.84.17
Jan 29 12:57:10.237747 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 29 12:57:10.251317 ignition[777]: GET result: OK
Jan 29 12:57:10.251500 ignition[777]: parsing config with SHA512: 0cc73bf442915502ced8804b442d0a95a2d271f9405fa2eb827c80d9422b5c3ec6f4664ced0e800a9f5d13588cac0d49f07ca0bb97d20ef0c348d81ba2e69615
Jan 29 12:57:10.258457 unknown[777]: fetched base config from "system"
Jan 29 12:57:10.259079 ignition[777]: fetch: fetch complete
Jan 29 12:57:10.258484 unknown[777]: fetched base config from "system"
Jan 29 12:57:10.259087 ignition[777]: fetch: fetch passed
Jan 29 12:57:10.258494 unknown[777]: fetched user config from "openstack"
Jan 29 12:57:10.259157 ignition[777]: Ignition finished successfully
Jan 29 12:57:10.261004 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 12:57:10.273652 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:57:10.291868 ignition[785]: Ignition 2.20.0
Jan 29 12:57:10.291891 ignition[785]: Stage: kargs
Jan 29 12:57:10.292145 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:57:10.292163 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:57:10.296625 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:57:10.293499 ignition[785]: kargs: kargs passed
Jan 29 12:57:10.293568 ignition[785]: Ignition finished successfully
Jan 29 12:57:10.305085 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:57:10.322652 ignition[792]: Ignition 2.20.0
Jan 29 12:57:10.323680 ignition[792]: Stage: disks
Jan 29 12:57:10.323916 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:57:10.323936 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:57:10.327287 ignition[792]: disks: disks passed
Jan 29 12:57:10.327357 ignition[792]: Ignition finished successfully
Jan 29 12:57:10.328485 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:57:10.330134 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:57:10.330932 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:57:10.332588 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:57:10.334120 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:57:10.335484 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:57:10.342985 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:57:10.363096 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 12:57:10.377175 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:57:10.384949 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:57:10.498829 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none.
Jan 29 12:57:10.500384 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:57:10.502606 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:57:10.512917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:57:10.515913 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:57:10.518703 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:57:10.520971 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 12:57:10.523552 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:57:10.523626 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:57:10.531980 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809)
Jan 29 12:57:10.531657 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:57:10.539377 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:57:10.539420 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:57:10.539439 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:57:10.538535 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:57:10.547167 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:57:10.552997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:57:10.625422 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:57:10.633816 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:57:10.640620 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:57:10.649287 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:57:10.751483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:57:10.764992 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:57:10.770151 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:57:10.776789 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:57:10.779388 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:57:10.809897 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:57:10.813564 ignition[925]: INFO : Ignition 2.20.0
Jan 29 12:57:10.815887 ignition[925]: INFO : Stage: mount
Jan 29 12:57:10.815887 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:57:10.815887 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:57:10.819066 ignition[925]: INFO : mount: mount passed
Jan 29 12:57:10.819066 ignition[925]: INFO : Ignition finished successfully
Jan 29 12:57:10.817019 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:57:11.360178 systemd-networkd[774]: eth0: Gained IPv6LL
Jan 29 12:57:12.868508 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d504:24:19ff:fef3:5412/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d504:24:19ff:fef3:5412/64 assigned by NDisc.
Jan 29 12:57:12.868524 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 12:57:17.689168 coreos-metadata[811]: Jan 29 12:57:17.688 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 12:57:17.713346 coreos-metadata[811]: Jan 29 12:57:17.713 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 12:57:17.726346 coreos-metadata[811]: Jan 29 12:57:17.726 INFO Fetch successful
Jan 29 12:57:17.727200 coreos-metadata[811]: Jan 29 12:57:17.726 INFO wrote hostname srv-i7wtu.gb1.brightbox.com to /sysroot/etc/hostname
Jan 29 12:57:17.729561 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 12:57:17.729759 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 12:57:17.740939 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:57:17.758250 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:57:17.770124 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Jan 29 12:57:17.770178 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:57:17.772349 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:57:17.774142 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:57:17.780015 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:57:17.782179 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:57:17.806743 ignition[960]: INFO : Ignition 2.20.0
Jan 29 12:57:17.808919 ignition[960]: INFO : Stage: files
Jan 29 12:57:17.808919 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:57:17.808919 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:57:17.811415 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:57:17.811415 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:57:17.811415 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:57:17.814459 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:57:17.815438 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:57:17.815438 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:57:17.815164 unknown[960]: wrote ssh authorized keys file for user: core
Jan 29 12:57:17.818292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:57:17.818292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 12:57:18.102498 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 12:57:18.541852 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:57:18.541852 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 12:57:18.552180 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 29 12:57:19.126308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 12:57:19.402962 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 12:57:19.405483 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:57:19.405483 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:57:19.405483 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:57:19.405483 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:57:19.405483 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:57:19.411995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:57:19.411995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:57:19.411995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:57:19.411995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:57:19.411995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:57:19.411995 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:57:19.418992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:57:19.418992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:57:19.418992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 12:57:19.888687 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 12:57:20.858112 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:57:20.858112 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:57:20.862200 ignition[960]: INFO : files: files passed
Jan 29 12:57:20.862200 ignition[960]: INFO : Ignition finished successfully
Jan 29 12:57:20.863968 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:57:20.876081 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:57:20.884111 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:57:20.889996 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:57:20.890149 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:57:20.903541 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:57:20.903541 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:57:20.906902 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:57:20.907893 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:57:20.909742 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:57:20.916025 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:57:20.950075 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:57:20.951175 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:57:20.952521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:57:20.953307 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:57:20.955090 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:57:20.960007 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:57:20.978501 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:57:20.987077 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:57:20.999235 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:57:21.000151 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:57:21.001865 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:57:21.003354 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:57:21.003547 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:57:21.005322 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:57:21.006200 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:57:21.007584 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:57:21.008992 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:57:21.010346 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:57:21.011925 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:57:21.013417 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:57:21.014999 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:57:21.016471 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:57:21.018100 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:57:21.019432 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:57:21.019610 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:57:21.021349 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:57:21.022299 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:57:21.023605 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:57:21.024013 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:57:21.025156 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:57:21.025331 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:57:21.027328 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:57:21.027486 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:57:21.030084 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:57:21.030240 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:57:21.039082 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:57:21.042453 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:57:21.042648 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:57:21.048079 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:57:21.049545 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:57:21.049729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:57:21.052231 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:57:21.052444 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:57:21.066441 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:57:21.066596 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:57:21.070884 ignition[1013]: INFO : Ignition 2.20.0
Jan 29 12:57:21.070884 ignition[1013]: INFO : Stage: umount
Jan 29 12:57:21.074037 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:57:21.074037 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:57:21.074037 ignition[1013]: INFO : umount: umount passed
Jan 29 12:57:21.074037 ignition[1013]: INFO : Ignition finished successfully
Jan 29 12:57:21.075892 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:57:21.076308 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:57:21.078447 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:57:21.078527 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:57:21.081671 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:57:21.081733 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:57:21.082963 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 12:57:21.083024 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 12:57:21.085296 systemd[1]: Stopped target network.target - Network.
Jan 29 12:57:21.085967 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:57:21.086051 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:57:21.086755 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:57:21.087366 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:57:21.090003 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:57:21.091554 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:57:21.093567 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:57:21.095106 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:57:21.095174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:57:21.097836 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:57:21.097890 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:57:21.099147 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:57:21.099209 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:57:21.100754 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:57:21.100895 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:57:21.102712 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:57:21.103993 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:57:21.107395 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:57:21.108190 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:57:21.108312 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:57:21.109888 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:57:21.109995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:57:21.110948 systemd-networkd[774]: eth0: DHCPv6 lease lost
Jan 29 12:57:21.112808 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:57:21.113056 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:57:21.117689 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:57:21.117912 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:57:21.121159 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:57:21.121600 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:57:21.129929 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:57:21.130637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:57:21.130704 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:57:21.133976 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:57:21.134053 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:57:21.135451 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:57:21.135523 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:57:21.137066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:57:21.137126 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:57:21.138692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:57:21.154369 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:57:21.154592 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:57:21.156430 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:57:21.156559 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:57:21.159136 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:57:21.159250 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:57:21.160519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:57:21.160584 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:57:21.162084 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:57:21.162152 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:57:21.164301 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:57:21.164365 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:57:21.165872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:57:21.165959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:57:21.177038 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:57:21.179198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:57:21.179292 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:57:21.180033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:57:21.180106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:57:21.185127 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:57:21.185266 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:57:21.187272 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:57:21.197384 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:57:21.205971 systemd[1]: Switching root.
Jan 29 12:57:21.240218 systemd-journald[203]: Journal stopped
Jan 29 12:57:22.610568 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:57:22.610705 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 12:57:22.610752 kernel: SELinux: policy capability open_perms=1
Jan 29 12:57:22.610770 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 12:57:22.610827 kernel: SELinux: policy capability always_check_network=0
Jan 29 12:57:22.610855 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 12:57:22.610874 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 12:57:22.610892 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 12:57:22.610922 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 12:57:22.610953 kernel: audit: type=1403 audit(1738155441.479:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 12:57:22.610976 systemd[1]: Successfully loaded SELinux policy in 55.493ms.
Jan 29 12:57:22.611010 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.541ms.
Jan 29 12:57:22.611039 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:57:22.611060 systemd[1]: Detected virtualization kvm.
Jan 29 12:57:22.611080 systemd[1]: Detected architecture x86-64.
Jan 29 12:57:22.611106 systemd[1]: Detected first boot.
Jan 29 12:57:22.611132 systemd[1]: Hostname set to .
Jan 29 12:57:22.611164 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:57:22.611185 zram_generator::config[1057]: No configuration found.
Jan 29 12:57:22.611205 systemd[1]: Populated /etc with preset unit settings.
Jan 29 12:57:22.611225 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 12:57:22.611244 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 12:57:22.611263 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 12:57:22.611284 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 12:57:22.611303 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 12:57:22.611336 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 12:57:22.611378 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 12:57:22.611408 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 12:57:22.611429 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 12:57:22.611455 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 12:57:22.611475 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:57:22.611496 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:57:22.611515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:57:22.611534 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 12:57:22.611565 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 12:57:22.611587 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 12:57:22.611607 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:57:22.611626 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 12:57:22.611646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:57:22.611670 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 12:57:22.611691 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 12:57:22.611724 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:57:22.611768 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 12:57:22.611816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:57:22.611839 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:57:22.611859 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:57:22.611878 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:57:22.611930 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 12:57:22.611952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 12:57:22.611972 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:57:22.611998 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:57:22.612019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:57:22.612038 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 12:57:22.612064 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 12:57:22.612085 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 12:57:22.612104 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 12:57:22.612136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:22.612157 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 12:57:22.612182 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 12:57:22.612203 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 12:57:22.612238 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 12:57:22.612260 systemd[1]: Reached target machines.target - Containers.
Jan 29 12:57:22.612287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 12:57:22.612308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:57:22.612340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:57:22.612362 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 12:57:22.612382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:57:22.612408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:57:22.612438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:57:22.612464 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 12:57:22.612484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:57:22.612504 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 12:57:22.612536 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 12:57:22.612557 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 12:57:22.612576 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 12:57:22.612595 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 12:57:22.612614 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:57:22.612632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:57:22.612652 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 12:57:22.612672 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 12:57:22.612690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:57:22.612726 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 12:57:22.612748 systemd[1]: Stopped verity-setup.service.
Jan 29 12:57:22.612768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:22.612812 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 12:57:22.612835 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 12:57:22.612884 systemd-journald[1146]: Collecting audit messages is disabled.
Jan 29 12:57:22.612939 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 12:57:22.612962 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 12:57:22.613008 systemd-journald[1146]: Journal started
Jan 29 12:57:22.613046 systemd-journald[1146]: Runtime Journal (/run/log/journal/63262d5399b94086bfc0e60c94162f0b) is 4.7M, max 38.0M, 33.2M free.
Jan 29 12:57:22.252960 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 12:57:22.273248 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 12:57:22.274028 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 12:57:22.617931 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:57:22.620661 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 12:57:22.623312 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 12:57:22.624319 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:57:22.625993 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 12:57:22.626623 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 12:57:22.629269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:57:22.629474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:57:22.630548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:57:22.630750 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:57:22.652553 kernel: loop: module loaded
Jan 29 12:57:22.642794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:57:22.644488 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 12:57:22.645668 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 12:57:22.662324 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:57:22.663150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:57:22.666174 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 12:57:22.670805 kernel: fuse: init (API version 7.39)
Jan 29 12:57:22.678862 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 12:57:22.680178 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 12:57:22.680232 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:57:22.684064 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 12:57:22.684809 kernel: ACPI: bus type drm_connector registered
Jan 29 12:57:22.692976 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 12:57:22.700005 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 12:57:22.702008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:57:22.707372 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 12:57:22.714996 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 12:57:22.715840 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:57:22.722986 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 12:57:22.724628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:57:22.732032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:57:22.736028 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 12:57:22.742247 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 12:57:22.744686 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:57:22.744979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:57:22.747223 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 12:57:22.752945 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 12:57:22.754018 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 12:57:22.756839 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 12:57:22.777275 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 12:57:22.794688 kernel: loop0: detected capacity change from 0 to 8
Jan 29 12:57:22.794884 systemd-journald[1146]: Time spent on flushing to /var/log/journal/63262d5399b94086bfc0e60c94162f0b is 95.837ms for 1148 entries.
Jan 29 12:57:22.794884 systemd-journald[1146]: System Journal (/var/log/journal/63262d5399b94086bfc0e60c94162f0b) is 8.0M, max 584.8M, 576.8M free.
Jan 29 12:57:22.908962 systemd-journald[1146]: Received client request to flush runtime journal.
Jan 29 12:57:22.909022 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 12:57:22.909059 kernel: loop1: detected capacity change from 0 to 140992
Jan 29 12:57:22.784523 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 12:57:22.788746 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 12:57:22.814685 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 12:57:22.817648 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 12:57:22.829118 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 12:57:22.915041 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 12:57:22.918750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:57:22.934312 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 12:57:22.938333 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 12:57:22.954851 kernel: loop2: detected capacity change from 0 to 138184
Jan 29 12:57:22.957767 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 12:57:22.978027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:57:23.011488 kernel: loop3: detected capacity change from 0 to 205544
Jan 29 12:57:23.074815 kernel: loop4: detected capacity change from 0 to 8
Jan 29 12:57:23.078271 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 12:57:23.078788 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 12:57:23.086995 kernel: loop5: detected capacity change from 0 to 140992
Jan 29 12:57:23.108533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:57:23.119808 kernel: loop6: detected capacity change from 0 to 138184
Jan 29 12:57:23.148013 kernel: loop7: detected capacity change from 0 to 205544
Jan 29 12:57:23.161759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:57:23.176055 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 12:57:23.189749 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 12:57:23.190573 (sd-merge)[1213]: Merged extensions into '/usr'.
Jan 29 12:57:23.203136 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 12:57:23.203169 systemd[1]: Reloading...
Jan 29 12:57:23.343813 zram_generator::config[1240]: No configuration found.
Jan 29 12:57:23.528549 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 12:57:23.581628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:57:23.650885 systemd[1]: Reloading finished in 447 ms.
Jan 29 12:57:23.691482 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 12:57:23.698907 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 12:57:23.710235 systemd[1]: Starting ensure-sysext.service...
Jan 29 12:57:23.726292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:57:23.730912 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 12:57:23.740819 systemd[1]: Reloading requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)...
Jan 29 12:57:23.740861 systemd[1]: Reloading...
Jan 29 12:57:23.768331 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 12:57:23.769787 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 12:57:23.772201 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 12:57:23.772575 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Jan 29 12:57:23.772680 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Jan 29 12:57:23.778456 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:57:23.778573 systemd-tmpfiles[1299]: Skipping /boot
Jan 29 12:57:23.798876 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:57:23.798896 systemd-tmpfiles[1299]: Skipping /boot
Jan 29 12:57:23.885816 zram_generator::config[1327]: No configuration found.
Jan 29 12:57:24.067086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:57:24.132764 systemd[1]: Reloading finished in 391 ms.
Jan 29 12:57:24.156887 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 12:57:24.167604 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:57:24.181025 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 12:57:24.194147 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 12:57:24.199991 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 12:57:24.206938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:57:24.213609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:57:24.217158 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 12:57:24.230079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:24.230368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:57:24.238202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:57:24.242205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:57:24.244455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:57:24.245994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:57:24.246216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:24.258151 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 12:57:24.260534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:24.260824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:57:24.261053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:57:24.261185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:24.266219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:24.266511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:57:24.276133 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:57:24.277091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:57:24.277306 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:57:24.280791 systemd[1]: Finished ensure-sysext.service.
Jan 29 12:57:24.294103 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 12:57:24.305551 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 12:57:24.318727 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 12:57:24.324712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 12:57:24.338334 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:57:24.338607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:57:24.340710 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 12:57:24.342567 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:57:24.343405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:57:24.345592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:57:24.346876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:57:24.348187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:57:24.348879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:57:24.355777 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:57:24.357648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:57:24.372032 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 12:57:24.376188 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 12:57:24.399846 augenrules[1425]: No rules
Jan 29 12:57:24.400038 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 12:57:24.400325 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 12:57:24.401198 systemd-udevd[1391]: Using default interface naming scheme 'v255'.
Jan 29 12:57:24.403620 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 12:57:24.454578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:57:24.465987 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:57:24.534089 systemd-resolved[1388]: Positive Trust Anchors:
Jan 29 12:57:24.534580 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:57:24.534726 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:57:24.538234 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 12:57:24.539310 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 12:57:24.550995 systemd-resolved[1388]: Using system hostname 'srv-i7wtu.gb1.brightbox.com'.
Jan 29 12:57:24.556072 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:57:24.557268 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:57:24.604559 systemd-networkd[1437]: lo: Link UP
Jan 29 12:57:24.605341 systemd-networkd[1437]: lo: Gained carrier
Jan 29 12:57:24.606977 systemd-networkd[1437]: Enumeration completed
Jan 29 12:57:24.607111 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:57:24.608471 systemd[1]: Reached target network.target - Network.
Jan 29 12:57:24.618158 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 12:57:24.671837 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 12:57:24.681747 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1444)
Jan 29 12:57:24.744519 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:57:24.744728 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:57:24.748597 systemd-networkd[1437]: eth0: Link UP
Jan 29 12:57:24.748844 systemd-networkd[1437]: eth0: Gained carrier
Jan 29 12:57:24.748962 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:57:24.778958 systemd-networkd[1437]: eth0: DHCPv4 address 10.243.84.18/30, gateway 10.243.84.17 acquired from 10.243.84.17
Jan 29 12:57:24.782582 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
Jan 29 12:57:24.801108 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 29 12:57:24.819212 kernel: ACPI: button: Power Button [PWRF]
Jan 29 12:57:24.825124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:57:24.832426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 12:57:24.834820 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 12:57:24.859768 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 12:57:24.886379 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Jan 29 12:57:24.897853 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 12:57:24.904088 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 12:57:24.904369 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 12:57:24.946616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:57:25.139971 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 12:57:25.142656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:57:25.150047 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 12:57:25.182560 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:57:25.224355 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 12:57:25.225650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:57:25.226454 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:57:25.227353 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 12:57:25.228227 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 12:57:25.229338 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 12:57:25.230270 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 12:57:25.231205 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 12:57:25.231987 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 12:57:25.232038 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:57:25.232706 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:57:25.234884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 12:57:25.237698 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 12:57:25.244115 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 12:57:25.246867 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 12:57:25.248317 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 12:57:25.249159 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:57:25.249869 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:57:25.250569 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:57:25.250623 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:57:25.253949 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 12:57:25.260884 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:57:25.264005 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 12:57:25.267995 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 12:57:25.281902 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 12:57:25.285014 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 12:57:25.286104 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 12:57:25.290015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 12:57:25.300917 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 12:57:25.308030 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 12:57:25.317002 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 12:57:25.319837 jq[1483]: false
Jan 29 12:57:25.328004 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 12:57:25.329432 dbus-daemon[1482]: [system] SELinux support is enabled
Jan 29 12:57:25.329459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 12:57:25.331181 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 12:57:25.333057 dbus-daemon[1482]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1437 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 12:57:25.333121 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 12:57:25.338559 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 12:57:25.341560 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 12:57:25.348862 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 12:57:25.355384 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 12:57:25.355959 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 12:57:25.362915 jq[1495]: true
Jan 29 12:57:25.375243 update_engine[1494]: I20250129 12:57:25.375126 1494 main.cc:92] Flatcar Update Engine starting
Jan 29 12:57:25.375967 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 12:57:25.376564 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 12:57:25.376060 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 12:57:25.379593 update_engine[1494]: I20250129 12:57:25.379356 1494 update_check_scheduler.cc:74] Next update check in 9m41s
Jan 29 12:57:25.380346 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 12:57:25.380383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 12:57:25.385308 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 12:57:25.400984 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 12:57:25.405113 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 12:57:25.416938 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 12:57:25.417287 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 12:57:25.419389 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 12:57:25.419616 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 12:57:25.437830 extend-filesystems[1484]: Found loop4 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found loop5 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found loop6 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found loop7 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda1 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda2 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda3 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found usr Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda4 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda6 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda7 Jan 29 12:57:25.444896 extend-filesystems[1484]: Found vda9 Jan 29 12:57:25.444896 extend-filesystems[1484]: Checking size of /dev/vda9 Jan 29 12:57:25.472336 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:57:25.497186 tar[1497]: linux-amd64/helm Jan 29 12:57:25.497521 jq[1499]: true Jan 29 12:57:25.513601 extend-filesystems[1484]: Resized partition /dev/vda9 Jan 29 12:57:25.526195 extend-filesystems[1525]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:57:25.537508 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 29 12:57:25.571339 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 12:57:25.581110 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:57:25.589527 systemd-logind[1492]: New seat seat0. Jan 29 12:57:25.595179 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 29 12:57:25.654239 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1449) Jan 29 12:57:25.754285 bash[1544]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:57:25.768548 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:57:25.793934 systemd[1]: Starting sshkeys.service... Jan 29 12:57:25.794982 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 12:57:25.797031 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 12:57:25.808381 dbus-daemon[1482]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1510 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 12:57:25.824987 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 12:57:25.845094 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:57:25.854855 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 12:57:25.857265 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:57:25.881775 polkitd[1549]: Started polkitd version 121 Jan 29 12:57:25.889816 extend-filesystems[1525]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:57:25.889816 extend-filesystems[1525]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 12:57:25.889816 extend-filesystems[1525]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 12:57:25.898569 extend-filesystems[1484]: Resized filesystem in /dev/vda9 Jan 29 12:57:25.894344 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:57:25.894684 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 29 12:57:25.913184 polkitd[1549]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 12:57:25.913302 polkitd[1549]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 12:57:25.931188 polkitd[1549]: Finished loading, compiling and executing 2 rules Jan 29 12:57:25.932180 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 12:57:25.932464 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 12:57:25.936473 polkitd[1549]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 12:57:25.957156 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:57:25.973695 systemd-hostnamed[1510]: Hostname set to (static) Jan 29 12:57:25.983716 containerd[1515]: time="2025-01-29T12:57:25.983506322Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 12:57:26.055775 containerd[1515]: time="2025-01-29T12:57:26.055692594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:57:26.061420 containerd[1515]: time="2025-01-29T12:57:26.061369399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:57:26.061490 containerd[1515]: time="2025-01-29T12:57:26.061419334Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:57:26.061490 containerd[1515]: time="2025-01-29T12:57:26.061456998Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:57:26.061840 containerd[1515]: time="2025-01-29T12:57:26.061810741Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 29 12:57:26.061888 containerd[1515]: time="2025-01-29T12:57:26.061850027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:57:26.061992 containerd[1515]: time="2025-01-29T12:57:26.061964058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:57:26.062046 containerd[1515]: time="2025-01-29T12:57:26.061993019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:57:26.062284 containerd[1515]: time="2025-01-29T12:57:26.062248525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:57:26.062339 containerd[1515]: time="2025-01-29T12:57:26.062288280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:57:26.062339 containerd[1515]: time="2025-01-29T12:57:26.062308175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:57:26.062339 containerd[1515]: time="2025-01-29T12:57:26.062323148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:57:26.062493 containerd[1515]: time="2025-01-29T12:57:26.062456094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:57:26.066513 containerd[1515]: time="2025-01-29T12:57:26.066465887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 12:57:26.066668 containerd[1515]: time="2025-01-29T12:57:26.066636033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:57:26.066771 containerd[1515]: time="2025-01-29T12:57:26.066669010Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:57:26.066861 containerd[1515]: time="2025-01-29T12:57:26.066844371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:57:26.066950 containerd[1515]: time="2025-01-29T12:57:26.066924723Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:57:26.073660 containerd[1515]: time="2025-01-29T12:57:26.073622159Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:57:26.073914 containerd[1515]: time="2025-01-29T12:57:26.073723130Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:57:26.073986 containerd[1515]: time="2025-01-29T12:57:26.073757922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:57:26.074029 containerd[1515]: time="2025-01-29T12:57:26.073999778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:57:26.074071 containerd[1515]: time="2025-01-29T12:57:26.074025951Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:57:26.075794 containerd[1515]: time="2025-01-29T12:57:26.074298530Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 29 12:57:26.075794 containerd[1515]: time="2025-01-29T12:57:26.074612676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:57:26.076567 containerd[1515]: time="2025-01-29T12:57:26.076521839Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:57:26.076636 containerd[1515]: time="2025-01-29T12:57:26.076573923Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:57:26.076636 containerd[1515]: time="2025-01-29T12:57:26.076598733Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:57:26.076636 containerd[1515]: time="2025-01-29T12:57:26.076623475Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:57:26.076751 containerd[1515]: time="2025-01-29T12:57:26.076643377Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:57:26.076751 containerd[1515]: time="2025-01-29T12:57:26.076672582Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:57:26.076751 containerd[1515]: time="2025-01-29T12:57:26.076694310Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:57:26.076751 containerd[1515]: time="2025-01-29T12:57:26.076715223Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:57:26.076751 containerd[1515]: time="2025-01-29T12:57:26.076743323Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076777800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076824011Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076863548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076887306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076906043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076926211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076944931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.076968 containerd[1515]: time="2025-01-29T12:57:26.076965083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.076983963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077003628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077022761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077054707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077070041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077087323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077116698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077134878Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077166931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077188291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.077244 containerd[1515]: time="2025-01-29T12:57:26.077222393Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:57:26.080598 containerd[1515]: time="2025-01-29T12:57:26.080566294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:57:26.080748 containerd[1515]: time="2025-01-29T12:57:26.080707557Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:57:26.080829 containerd[1515]: time="2025-01-29T12:57:26.080748926Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:57:26.080829 containerd[1515]: time="2025-01-29T12:57:26.080803728Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:57:26.080829 containerd[1515]: time="2025-01-29T12:57:26.080822327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:57:26.080952 containerd[1515]: time="2025-01-29T12:57:26.080842218Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:57:26.080952 containerd[1515]: time="2025-01-29T12:57:26.080867743Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:57:26.080952 containerd[1515]: time="2025-01-29T12:57:26.080887756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:57:26.082706 containerd[1515]: time="2025-01-29T12:57:26.081234851Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:57:26.082706 containerd[1515]: time="2025-01-29T12:57:26.081302169Z" level=info msg="Connect containerd service" Jan 29 12:57:26.082706 containerd[1515]: time="2025-01-29T12:57:26.081338367Z" level=info msg="using legacy CRI server" Jan 29 12:57:26.082706 containerd[1515]: time="2025-01-29T12:57:26.081363128Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:57:26.082706 containerd[1515]: time="2025-01-29T12:57:26.081986337Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:57:26.084378 containerd[1515]: time="2025-01-29T12:57:26.084343776Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:57:26.084650 containerd[1515]: time="2025-01-29T12:57:26.084597206Z" level=info msg="Start subscribing containerd event" Jan 29 12:57:26.084714 containerd[1515]: time="2025-01-29T12:57:26.084671431Z" level=info msg="Start recovering state" Jan 29 12:57:26.085163 containerd[1515]: time="2025-01-29T12:57:26.085136714Z" level=info msg="Start event monitor" Jan 29 12:57:26.085228 containerd[1515]: time="2025-01-29T12:57:26.085173946Z" level=info msg="Start 
snapshots syncer" Jan 29 12:57:26.085228 containerd[1515]: time="2025-01-29T12:57:26.085193038Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:57:26.085228 containerd[1515]: time="2025-01-29T12:57:26.085205768Z" level=info msg="Start streaming server" Jan 29 12:57:26.087148 containerd[1515]: time="2025-01-29T12:57:26.087120163Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:57:26.087254 containerd[1515]: time="2025-01-29T12:57:26.087220146Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:57:26.090409 containerd[1515]: time="2025-01-29T12:57:26.088140415Z" level=info msg="containerd successfully booted in 0.105992s" Jan 29 12:57:26.088266 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:57:26.144194 systemd-networkd[1437]: eth0: Gained IPv6LL Jan 29 12:57:26.148944 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jan 29 12:57:26.153990 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:57:26.157877 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:57:26.171073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:57:26.174361 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:57:26.249338 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:57:26.646202 tar[1497]: linux-amd64/LICENSE Jan 29 12:57:26.649045 tar[1497]: linux-amd64/README.md Jan 29 12:57:26.680306 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:57:26.798101 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:57:26.811026 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. 
Jan 29 12:57:26.814181 systemd-networkd[1437]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d504:24:19ff:fef3:5412/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d504:24:19ff:fef3:5412/64 assigned by NDisc. Jan 29 12:57:26.814193 systemd-networkd[1437]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 29 12:57:26.837378 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:57:26.847266 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:57:26.869831 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:57:26.870167 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:57:26.881301 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:57:26.897686 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:57:26.909103 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:57:26.920323 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:57:26.921483 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:57:27.176273 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:57:27.183276 systemd[1]: Started sshd@0-10.243.84.18:22-147.75.109.163:36348.service - OpenSSH per-connection server daemon (147.75.109.163:36348). Jan 29 12:57:27.196009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:57:27.206635 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:57:27.812261 kubelet[1608]: E0129 12:57:27.812028 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:57:27.817452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:57:27.817740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:57:27.820114 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jan 29 12:57:28.064876 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jan 29 12:57:28.092701 sshd[1606]: Accepted publickey for core from 147.75.109.163 port 36348 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 12:57:28.094283 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:28.114172 systemd-logind[1492]: New session 1 of user core. Jan 29 12:57:28.117379 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:57:28.132724 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:57:28.156377 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:57:28.165279 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:57:28.180862 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:57:28.321855 systemd[1618]: Queued start job for default target default.target. 
Jan 29 12:57:28.334862 systemd[1618]: Created slice app.slice - User Application Slice. Jan 29 12:57:28.334905 systemd[1618]: Reached target paths.target - Paths. Jan 29 12:57:28.334927 systemd[1618]: Reached target timers.target - Timers. Jan 29 12:57:28.337254 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:57:28.354557 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:57:28.354779 systemd[1618]: Reached target sockets.target - Sockets. Jan 29 12:57:28.354823 systemd[1618]: Reached target basic.target - Basic System. Jan 29 12:57:28.354897 systemd[1618]: Reached target default.target - Main User Target. Jan 29 12:57:28.354962 systemd[1618]: Startup finished in 163ms. Jan 29 12:57:28.355181 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:57:28.368109 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:57:29.008366 systemd[1]: Started sshd@1-10.243.84.18:22-147.75.109.163:42712.service - OpenSSH per-connection server daemon (147.75.109.163:42712). Jan 29 12:57:29.913846 sshd[1631]: Accepted publickey for core from 147.75.109.163 port 42712 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 12:57:29.916051 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:29.923773 systemd-logind[1492]: New session 2 of user core. Jan 29 12:57:29.937156 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:57:30.531125 sshd[1633]: Connection closed by 147.75.109.163 port 42712 Jan 29 12:57:30.530946 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:30.535061 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:57:30.535685 systemd[1]: sshd@1-10.243.84.18:22-147.75.109.163:42712.service: Deactivated successfully. Jan 29 12:57:30.538389 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 29 12:57:30.540831 systemd-logind[1492]: Removed session 2. Jan 29 12:57:30.699360 systemd[1]: Started sshd@2-10.243.84.18:22-147.75.109.163:42726.service - OpenSSH per-connection server daemon (147.75.109.163:42726). Jan 29 12:57:31.585323 sshd[1638]: Accepted publickey for core from 147.75.109.163 port 42726 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 12:57:31.587312 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:57:31.594116 systemd-logind[1492]: New session 3 of user core. Jan 29 12:57:31.601086 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:57:31.969096 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:57:31.971680 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:57:31.977399 systemd-logind[1492]: New session 5 of user core. Jan 29 12:57:31.990103 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:57:31.994548 systemd-logind[1492]: New session 4 of user core. Jan 29 12:57:32.002097 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:57:32.200719 sshd[1640]: Connection closed by 147.75.109.163 port 42726 Jan 29 12:57:32.201856 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Jan 29 12:57:32.206851 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:57:32.208365 systemd[1]: sshd@2-10.243.84.18:22-147.75.109.163:42726.service: Deactivated successfully. Jan 29 12:57:32.211310 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:57:32.213049 systemd-logind[1492]: Removed session 3. 
Jan 29 12:57:32.462296 coreos-metadata[1481]: Jan 29 12:57:32.462 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:57:32.488377 coreos-metadata[1481]: Jan 29 12:57:32.488 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 12:57:32.495943 coreos-metadata[1481]: Jan 29 12:57:32.495 INFO Fetch failed with 404: resource not found Jan 29 12:57:32.495943 coreos-metadata[1481]: Jan 29 12:57:32.495 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:57:32.496709 coreos-metadata[1481]: Jan 29 12:57:32.496 INFO Fetch successful Jan 29 12:57:32.496847 coreos-metadata[1481]: Jan 29 12:57:32.496 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 12:57:32.507353 coreos-metadata[1481]: Jan 29 12:57:32.507 INFO Fetch successful Jan 29 12:57:32.507546 coreos-metadata[1481]: Jan 29 12:57:32.507 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 12:57:32.522065 coreos-metadata[1481]: Jan 29 12:57:32.522 INFO Fetch successful Jan 29 12:57:32.522213 coreos-metadata[1481]: Jan 29 12:57:32.522 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 12:57:32.536521 coreos-metadata[1481]: Jan 29 12:57:32.536 INFO Fetch successful Jan 29 12:57:32.536632 coreos-metadata[1481]: Jan 29 12:57:32.536 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 12:57:32.555374 coreos-metadata[1481]: Jan 29 12:57:32.555 INFO Fetch successful Jan 29 12:57:32.591815 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:57:32.593388 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 29 12:57:32.975713 coreos-metadata[1552]: Jan 29 12:57:32.975 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:57:32.996716 coreos-metadata[1552]: Jan 29 12:57:32.996 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 12:57:33.020399 coreos-metadata[1552]: Jan 29 12:57:33.020 INFO Fetch successful Jan 29 12:57:33.020556 coreos-metadata[1552]: Jan 29 12:57:33.020 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:57:33.052952 coreos-metadata[1552]: Jan 29 12:57:33.052 INFO Fetch successful Jan 29 12:57:33.055446 unknown[1552]: wrote ssh authorized keys file for user: core Jan 29 12:57:33.074713 update-ssh-keys[1680]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:57:33.075419 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:57:33.078001 systemd[1]: Finished sshkeys.service. Jan 29 12:57:33.081210 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:57:33.087000 systemd[1]: Startup finished in 1.443s (kernel) + 14.726s (initrd) + 11.661s (userspace) = 27.831s. Jan 29 12:57:38.053848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:57:38.062070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:57:38.281063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:57:38.287743 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:57:38.350623 kubelet[1692]: E0129 12:57:38.350381 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:57:38.354644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:57:38.354925 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:57:42.357311 systemd[1]: Started sshd@3-10.243.84.18:22-147.75.109.163:58562.service - OpenSSH per-connection server daemon (147.75.109.163:58562).
Jan 29 12:57:43.261883 sshd[1700]: Accepted publickey for core from 147.75.109.163 port 58562 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 12:57:43.264325 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:57:43.271866 systemd-logind[1492]: New session 6 of user core.
Jan 29 12:57:43.278989 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 12:57:43.881945 sshd[1702]: Connection closed by 147.75.109.163 port 58562
Jan 29 12:57:43.881698 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Jan 29 12:57:43.886577 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
Jan 29 12:57:43.886972 systemd[1]: sshd@3-10.243.84.18:22-147.75.109.163:58562.service: Deactivated successfully.
Jan 29 12:57:43.889232 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 12:57:43.891455 systemd-logind[1492]: Removed session 6.
Jan 29 12:57:44.039540 systemd[1]: Started sshd@4-10.243.84.18:22-147.75.109.163:58574.service - OpenSSH per-connection server daemon (147.75.109.163:58574).
Jan 29 12:57:44.927829 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 58574 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 12:57:44.930538 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:57:44.938074 systemd-logind[1492]: New session 7 of user core.
Jan 29 12:57:44.948122 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 12:57:45.541910 sshd[1709]: Connection closed by 147.75.109.163 port 58574
Jan 29 12:57:45.541718 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Jan 29 12:57:45.546494 systemd[1]: sshd@4-10.243.84.18:22-147.75.109.163:58574.service: Deactivated successfully.
Jan 29 12:57:45.546970 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
Jan 29 12:57:45.548889 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 12:57:45.550781 systemd-logind[1492]: Removed session 7.
Jan 29 12:57:45.696095 systemd[1]: Started sshd@5-10.243.84.18:22-147.75.109.163:58590.service - OpenSSH per-connection server daemon (147.75.109.163:58590).
Jan 29 12:57:46.605734 sshd[1714]: Accepted publickey for core from 147.75.109.163 port 58590 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 12:57:46.608376 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:57:46.616058 systemd-logind[1492]: New session 8 of user core.
Jan 29 12:57:46.631195 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 12:57:47.230176 sshd[1716]: Connection closed by 147.75.109.163 port 58590
Jan 29 12:57:47.231373 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
Jan 29 12:57:47.236719 systemd[1]: sshd@5-10.243.84.18:22-147.75.109.163:58590.service: Deactivated successfully.
Jan 29 12:57:47.239434 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 12:57:47.240269 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Jan 29 12:57:47.241729 systemd-logind[1492]: Removed session 8.
Jan 29 12:57:47.395188 systemd[1]: Started sshd@6-10.243.84.18:22-147.75.109.163:34562.service - OpenSSH per-connection server daemon (147.75.109.163:34562).
Jan 29 12:57:48.287679 sshd[1721]: Accepted publickey for core from 147.75.109.163 port 34562 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 12:57:48.292242 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:57:48.300305 systemd-logind[1492]: New session 9 of user core.
Jan 29 12:57:48.313029 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 12:57:48.553532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 12:57:48.564326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:57:48.786019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:57:48.786234 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:57:48.792717 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 12:57:48.793688 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:57:48.806313 sudo[1727]: pam_unix(sudo:session): session closed for user root
Jan 29 12:57:48.839525 kubelet[1733]: E0129 12:57:48.839242 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:57:48.841972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:57:48.842208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:57:48.951001 sshd[1723]: Connection closed by 147.75.109.163 port 34562
Jan 29 12:57:48.951701 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Jan 29 12:57:48.956452 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit.
Jan 29 12:57:48.956966 systemd[1]: sshd@6-10.243.84.18:22-147.75.109.163:34562.service: Deactivated successfully.
Jan 29 12:57:48.959051 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 12:57:48.960920 systemd-logind[1492]: Removed session 9.
Jan 29 12:57:49.117185 systemd[1]: Started sshd@7-10.243.84.18:22-147.75.109.163:34574.service - OpenSSH per-connection server daemon (147.75.109.163:34574).
Jan 29 12:57:50.003322 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 34574 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 12:57:50.005415 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:57:50.012448 systemd-logind[1492]: New session 10 of user core.
Jan 29 12:57:50.019012 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:57:50.478557 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 12:57:50.479028 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:57:50.484215 sudo[1747]: pam_unix(sudo:session): session closed for user root
Jan 29 12:57:50.492536 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 12:57:50.492999 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:57:50.519428 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 12:57:50.561158 augenrules[1769]: No rules
Jan 29 12:57:50.563061 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 12:57:50.563376 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 12:57:50.565213 sudo[1746]: pam_unix(sudo:session): session closed for user root
Jan 29 12:57:50.708107 sshd[1745]: Connection closed by 147.75.109.163 port 34574
Jan 29 12:57:50.709265 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Jan 29 12:57:50.715699 systemd[1]: sshd@7-10.243.84.18:22-147.75.109.163:34574.service: Deactivated successfully.
Jan 29 12:57:50.718006 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:57:50.718900 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:57:50.720583 systemd-logind[1492]: Removed session 10.
Jan 29 12:57:50.860703 systemd[1]: Started sshd@8-10.243.84.18:22-147.75.109.163:34586.service - OpenSSH per-connection server daemon (147.75.109.163:34586).
Jan 29 12:57:51.765236 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 34586 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 12:57:51.768053 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:57:51.774777 systemd-logind[1492]: New session 11 of user core.
Jan 29 12:57:51.782984 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:57:52.241659 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 12:57:52.242220 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:57:52.706261 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 12:57:52.707753 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 12:57:53.130755 dockerd[1800]: time="2025-01-29T12:57:53.130593297Z" level=info msg="Starting up"
Jan 29 12:57:53.253109 systemd[1]: var-lib-docker-metacopy\x2dcheck1414464437-merged.mount: Deactivated successfully.
Jan 29 12:57:53.277458 dockerd[1800]: time="2025-01-29T12:57:53.276892154Z" level=info msg="Loading containers: start."
Jan 29 12:57:53.482835 kernel: Initializing XFRM netlink socket
Jan 29 12:57:53.518292 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
Jan 29 12:57:53.591293 systemd-networkd[1437]: docker0: Link UP
Jan 29 12:57:53.627801 dockerd[1800]: time="2025-01-29T12:57:53.627713778Z" level=info msg="Loading containers: done."
Jan 29 12:57:53.651372 dockerd[1800]: time="2025-01-29T12:57:53.651269072Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 12:57:53.651626 dockerd[1800]: time="2025-01-29T12:57:53.651438473Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 29 12:57:53.651715 dockerd[1800]: time="2025-01-29T12:57:53.651629378Z" level=info msg="Daemon has completed initialization"
Jan 29 12:57:53.692365 dockerd[1800]: time="2025-01-29T12:57:53.692269916Z" level=info msg="API listen on /run/docker.sock"
Jan 29 12:57:53.692840 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 12:57:53.778197 systemd-timesyncd[1402]: Contacted time server [2a00:da00:1800:837c::1]:123 (2.flatcar.pool.ntp.org).
Jan 29 12:57:53.778291 systemd-timesyncd[1402]: Initial clock synchronization to Wed 2025-01-29 12:57:53.983071 UTC.
Jan 29 12:57:54.922177 containerd[1515]: time="2025-01-29T12:57:54.922028400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 29 12:57:55.901397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139991042.mount: Deactivated successfully.
Jan 29 12:57:56.862294 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 12:57:57.489722 containerd[1515]: time="2025-01-29T12:57:57.489609912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:57:57.491518 containerd[1515]: time="2025-01-29T12:57:57.491470969Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729"
Jan 29 12:57:57.492663 containerd[1515]: time="2025-01-29T12:57:57.492592817Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:57:57.496587 containerd[1515]: time="2025-01-29T12:57:57.496509225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:57:57.498867 containerd[1515]: time="2025-01-29T12:57:57.498158104Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.576013435s"
Jan 29 12:57:57.498867 containerd[1515]: time="2025-01-29T12:57:57.498238683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\""
Jan 29 12:57:57.500647 containerd[1515]: time="2025-01-29T12:57:57.500611414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 29 12:57:59.055455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 12:57:59.067136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:57:59.313089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:57:59.316319 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:57:59.465622 kubelet[2062]: E0129 12:57:59.465477 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:57:59.479321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:57:59.479622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:57:59.901610 containerd[1515]: time="2025-01-29T12:57:59.901437456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:57:59.903708 containerd[1515]: time="2025-01-29T12:57:59.903626239Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151"
Jan 29 12:57:59.904341 containerd[1515]: time="2025-01-29T12:57:59.904059987Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:57:59.908112 containerd[1515]: time="2025-01-29T12:57:59.908014252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:57:59.910102 containerd[1515]: time="2025-01-29T12:57:59.909682412Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.408865397s"
Jan 29 12:57:59.910102 containerd[1515]: time="2025-01-29T12:57:59.909756159Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\""
Jan 29 12:57:59.912147 containerd[1515]: time="2025-01-29T12:57:59.912106855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 29 12:58:01.704480 containerd[1515]: time="2025-01-29T12:58:01.704083579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:01.706423 containerd[1515]: time="2025-01-29T12:58:01.705768856Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061"
Jan 29 12:58:01.707480 containerd[1515]: time="2025-01-29T12:58:01.707348099Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:01.714281 containerd[1515]: time="2025-01-29T12:58:01.714217876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:01.715837 containerd[1515]: time="2025-01-29T12:58:01.715579461Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.803423326s"
Jan 29 12:58:01.715837 containerd[1515]: time="2025-01-29T12:58:01.715631961Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\""
Jan 29 12:58:01.717173 containerd[1515]: time="2025-01-29T12:58:01.717072894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 12:58:03.425543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1570409332.mount: Deactivated successfully.
Jan 29 12:58:04.342878 containerd[1515]: time="2025-01-29T12:58:04.342694998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:04.344414 containerd[1515]: time="2025-01-29T12:58:04.344356520Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136"
Jan 29 12:58:04.345634 containerd[1515]: time="2025-01-29T12:58:04.345558099Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:04.349437 containerd[1515]: time="2025-01-29T12:58:04.349384495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:04.351398 containerd[1515]: time="2025-01-29T12:58:04.350760458Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.633644583s"
Jan 29 12:58:04.351398 containerd[1515]: time="2025-01-29T12:58:04.350848537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\""
Jan 29 12:58:04.352677 containerd[1515]: time="2025-01-29T12:58:04.352404156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 12:58:05.026330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800288176.mount: Deactivated successfully.
Jan 29 12:58:06.419845 containerd[1515]: time="2025-01-29T12:58:06.419622028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:06.421878 containerd[1515]: time="2025-01-29T12:58:06.421813750Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 29 12:58:06.423126 containerd[1515]: time="2025-01-29T12:58:06.423059965Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:06.428168 containerd[1515]: time="2025-01-29T12:58:06.428103582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:06.429346 containerd[1515]: time="2025-01-29T12:58:06.429054215Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.076603848s"
Jan 29 12:58:06.429346 containerd[1515]: time="2025-01-29T12:58:06.429109296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 29 12:58:06.430870 containerd[1515]: time="2025-01-29T12:58:06.430528240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 12:58:07.083472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343714775.mount: Deactivated successfully.
Jan 29 12:58:07.091529 containerd[1515]: time="2025-01-29T12:58:07.091466263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:07.092951 containerd[1515]: time="2025-01-29T12:58:07.092865886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 29 12:58:07.094194 containerd[1515]: time="2025-01-29T12:58:07.093840433Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:07.100897 containerd[1515]: time="2025-01-29T12:58:07.100858786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:07.103278 containerd[1515]: time="2025-01-29T12:58:07.103232179Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 672.664366ms"
Jan 29 12:58:07.103278 containerd[1515]: time="2025-01-29T12:58:07.103275801Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 29 12:58:07.104698 containerd[1515]: time="2025-01-29T12:58:07.104416402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 12:58:07.719333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2025317547.mount: Deactivated successfully.
Jan 29 12:58:09.554983 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 12:58:09.564303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:58:09.983102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:58:09.990220 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:58:10.106107 kubelet[2189]: E0129 12:58:10.105971 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:58:10.109190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:58:10.109462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:58:10.262310 update_engine[1494]: I20250129 12:58:10.261054 1494 update_attempter.cc:509] Updating boot flags...
Jan 29 12:58:10.359818 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2208)
Jan 29 12:58:10.511874 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2210)
Jan 29 12:58:10.581396 containerd[1515]: time="2025-01-29T12:58:10.577824046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:10.581396 containerd[1515]: time="2025-01-29T12:58:10.580113571Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981"
Jan 29 12:58:10.582191 containerd[1515]: time="2025-01-29T12:58:10.581528027Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:10.613361 containerd[1515]: time="2025-01-29T12:58:10.613291891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:10.615557 containerd[1515]: time="2025-01-29T12:58:10.615517240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.511054908s"
Jan 29 12:58:10.615640 containerd[1515]: time="2025-01-29T12:58:10.615564334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 29 12:58:14.924184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:58:14.939141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:58:14.971018 systemd[1]: Reloading requested from client PID 2239 ('systemctl') (unit session-11.scope)...
Jan 29 12:58:14.971071 systemd[1]: Reloading...
Jan 29 12:58:15.134834 zram_generator::config[2277]: No configuration found.
Jan 29 12:58:15.305705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:58:15.411678 systemd[1]: Reloading finished in 439 ms.
Jan 29 12:58:15.558934 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 12:58:15.559049 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 12:58:15.559479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:58:15.566181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:58:15.714816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:58:15.728537 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:58:15.821936 kubelet[2344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:58:15.821936 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:58:15.821936 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:58:15.821936 kubelet[2344]: I0129 12:58:15.821842 2344 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:58:16.547555 kubelet[2344]: I0129 12:58:16.547458 2344 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 12:58:16.547555 kubelet[2344]: I0129 12:58:16.547509 2344 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:58:16.548063 kubelet[2344]: I0129 12:58:16.547938 2344 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 12:58:16.576172 kubelet[2344]: I0129 12:58:16.574962 2344 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:58:16.576172 kubelet[2344]: E0129 12:58:16.575083 2344 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.84.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:58:16.587741 kubelet[2344]: E0129 12:58:16.587668 2344 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 12:58:16.587741 kubelet[2344]: I0129 12:58:16.587721 2344 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 12:58:16.595216 kubelet[2344]: I0129 12:58:16.595172 2344 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:58:16.596707 kubelet[2344]: I0129 12:58:16.596656 2344 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 12:58:16.597011 kubelet[2344]: I0129 12:58:16.596955 2344 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:58:16.597281 kubelet[2344]: I0129 12:58:16.597007 2344 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i7wtu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 12:58:16.597542 kubelet[2344]: I0129 12:58:16.597332 2344 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:58:16.597542 kubelet[2344]: I0129 12:58:16.597353 2344 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 12:58:16.597636 kubelet[2344]: I0129 12:58:16.597554 2344 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:58:16.600659 kubelet[2344]: I0129 12:58:16.600249 2344 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 12:58:16.600659 kubelet[2344]: I0129 12:58:16.600290 2344 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:58:16.600659 kubelet[2344]: I0129 12:58:16.600375 2344 kubelet.go:314] "Adding apiserver pod source"
Jan 29 12:58:16.600659 kubelet[2344]: I0129 12:58:16.600439 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:58:16.603632 kubelet[2344]: W0129 12:58:16.602751 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.84.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i7wtu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443: connect: connection refused
Jan 29 12:58:16.603632 kubelet[2344]: E0129 12:58:16.602864 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.84.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i7wtu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:58:16.608236 kubelet[2344]: W0129 12:58:16.608179 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.84.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443:
connect: connection refused Jan 29 12:58:16.608520 kubelet[2344]: E0129 12:58:16.608368 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.84.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:58:16.609317 kubelet[2344]: I0129 12:58:16.609153 2344 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 12:58:16.611810 kubelet[2344]: I0129 12:58:16.611050 2344 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:58:16.611810 kubelet[2344]: W0129 12:58:16.611191 2344 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:58:16.612460 kubelet[2344]: I0129 12:58:16.612440 2344 server.go:1269] "Started kubelet" Jan 29 12:58:16.613705 kubelet[2344]: I0129 12:58:16.613271 2344 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:58:16.614860 kubelet[2344]: I0129 12:58:16.614830 2344 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:58:16.619845 kubelet[2344]: I0129 12:58:16.617267 2344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:58:16.619845 kubelet[2344]: I0129 12:58:16.617689 2344 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:58:16.625008 kubelet[2344]: I0129 12:58:16.623688 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:58:16.625564 kubelet[2344]: E0129 12:58:16.620940 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.84.18:6443/api/v1/namespaces/default/events\": 
dial tcp 10.243.84.18:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-i7wtu.gb1.brightbox.com.181f2b370bf9a356 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-i7wtu.gb1.brightbox.com,UID:srv-i7wtu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-i7wtu.gb1.brightbox.com,},FirstTimestamp:2025-01-29 12:58:16.612406102 +0000 UTC m=+0.877897198,LastTimestamp:2025-01-29 12:58:16.612406102 +0000 UTC m=+0.877897198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-i7wtu.gb1.brightbox.com,}" Jan 29 12:58:16.629187 kubelet[2344]: I0129 12:58:16.628136 2344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:58:16.633811 kubelet[2344]: E0129 12:58:16.631887 2344 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-i7wtu.gb1.brightbox.com\" not found" Jan 29 12:58:16.633811 kubelet[2344]: I0129 12:58:16.631961 2344 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:58:16.633811 kubelet[2344]: I0129 12:58:16.633592 2344 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:58:16.633811 kubelet[2344]: I0129 12:58:16.633706 2344 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:58:16.638044 kubelet[2344]: I0129 12:58:16.637464 2344 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:58:16.638044 kubelet[2344]: I0129 12:58:16.637587 2344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:58:16.639941 kubelet[2344]: 
E0129 12:58:16.639874 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i7wtu.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.18:6443: connect: connection refused" interval="200ms" Jan 29 12:58:16.640173 kubelet[2344]: W0129 12:58:16.640030 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.84.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443: connect: connection refused Jan 29 12:58:16.640173 kubelet[2344]: E0129 12:58:16.640127 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.84.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:58:16.640386 kubelet[2344]: I0129 12:58:16.640338 2344 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:58:16.663985 kubelet[2344]: I0129 12:58:16.663925 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:58:16.665707 kubelet[2344]: I0129 12:58:16.665682 2344 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:58:16.665889 kubelet[2344]: I0129 12:58:16.665868 2344 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:58:16.666041 kubelet[2344]: I0129 12:58:16.666021 2344 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:58:16.666342 kubelet[2344]: E0129 12:58:16.666312 2344 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:58:16.666827 kubelet[2344]: E0129 12:58:16.666775 2344 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:58:16.679015 kubelet[2344]: W0129 12:58:16.678631 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.84.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443: connect: connection refused Jan 29 12:58:16.679015 kubelet[2344]: E0129 12:58:16.678724 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.84.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:58:16.681394 kubelet[2344]: I0129 12:58:16.681370 2344 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:58:16.681625 kubelet[2344]: I0129 12:58:16.681586 2344 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:58:16.681775 kubelet[2344]: I0129 12:58:16.681746 2344 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:58:16.683969 kubelet[2344]: I0129 12:58:16.683946 2344 policy_none.go:49] "None policy: Start" Jan 29 12:58:16.684846 kubelet[2344]: I0129 12:58:16.684822 2344 memory_manager.go:170] "Starting memorymanager" 
policy="None" Jan 29 12:58:16.684929 kubelet[2344]: I0129 12:58:16.684892 2344 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:58:16.697280 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:58:16.718635 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:58:16.724336 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:58:16.732739 kubelet[2344]: E0129 12:58:16.732682 2344 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-i7wtu.gb1.brightbox.com\" not found" Jan 29 12:58:16.735409 kubelet[2344]: I0129 12:58:16.735382 2344 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:58:16.735951 kubelet[2344]: I0129 12:58:16.735716 2344 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:58:16.735951 kubelet[2344]: I0129 12:58:16.735744 2344 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:58:16.736599 kubelet[2344]: I0129 12:58:16.736253 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:58:16.739894 kubelet[2344]: E0129 12:58:16.739706 2344 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-i7wtu.gb1.brightbox.com\" not found" Jan 29 12:58:16.785298 systemd[1]: Created slice kubepods-burstable-pod7e4b39adb34c566a08992129ca4808a5.slice - libcontainer container kubepods-burstable-pod7e4b39adb34c566a08992129ca4808a5.slice. Jan 29 12:58:16.798828 systemd[1]: Created slice kubepods-burstable-podc809cda956acd1e1092b3682b148238e.slice - libcontainer container kubepods-burstable-podc809cda956acd1e1092b3682b148238e.slice. 
Jan 29 12:58:16.805744 systemd[1]: Created slice kubepods-burstable-pod99b063b921dad5cc944375410d40b924.slice - libcontainer container kubepods-burstable-pod99b063b921dad5cc944375410d40b924.slice.
Jan 29 12:58:16.838534 kubelet[2344]: I0129 12:58:16.838476 2344 kubelet_node_status.go:72] "Attempting to register node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.839357 kubelet[2344]: E0129 12:58:16.838991 2344 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.84.18:6443/api/v1/nodes\": dial tcp 10.243.84.18:6443: connect: connection refused" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.840588 kubelet[2344]: E0129 12:58:16.840540 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i7wtu.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.18:6443: connect: connection refused" interval="400ms"
Jan 29 12:58:16.935368 kubelet[2344]: I0129 12:58:16.935276 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99b063b921dad5cc944375410d40b924-ca-certs\") pod \"kube-apiserver-srv-i7wtu.gb1.brightbox.com\" (UID: \"99b063b921dad5cc944375410d40b924\") " pod="kube-system/kube-apiserver-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936065 kubelet[2344]: I0129 12:58:16.935678 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99b063b921dad5cc944375410d40b924-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i7wtu.gb1.brightbox.com\" (UID: \"99b063b921dad5cc944375410d40b924\") " pod="kube-system/kube-apiserver-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936065 kubelet[2344]: I0129 12:58:16.935728 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-ca-certs\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936065 kubelet[2344]: I0129 12:58:16.935828 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-k8s-certs\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936065 kubelet[2344]: I0129 12:58:16.935860 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-kubeconfig\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936065 kubelet[2344]: I0129 12:58:16.935909 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936355 kubelet[2344]: I0129 12:58:16.935950 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e4b39adb34c566a08992129ca4808a5-kubeconfig\") pod \"kube-scheduler-srv-i7wtu.gb1.brightbox.com\" (UID: \"7e4b39adb34c566a08992129ca4808a5\") " pod="kube-system/kube-scheduler-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936355 kubelet[2344]: I0129 12:58:16.935996 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99b063b921dad5cc944375410d40b924-k8s-certs\") pod \"kube-apiserver-srv-i7wtu.gb1.brightbox.com\" (UID: \"99b063b921dad5cc944375410d40b924\") " pod="kube-system/kube-apiserver-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:16.936529 kubelet[2344]: I0129 12:58:16.936482 2344 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-flexvolume-dir\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:17.042399 kubelet[2344]: I0129 12:58:17.042329 2344 kubelet_node_status.go:72] "Attempting to register node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:17.042864 kubelet[2344]: E0129 12:58:17.042776 2344 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.84.18:6443/api/v1/nodes\": dial tcp 10.243.84.18:6443: connect: connection refused" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:17.098190 containerd[1515]: time="2025-01-29T12:58:17.098084925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i7wtu.gb1.brightbox.com,Uid:7e4b39adb34c566a08992129ca4808a5,Namespace:kube-system,Attempt:0,}"
Jan 29 12:58:17.107808 containerd[1515]: time="2025-01-29T12:58:17.107722413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i7wtu.gb1.brightbox.com,Uid:c809cda956acd1e1092b3682b148238e,Namespace:kube-system,Attempt:0,}"
Jan 29 12:58:17.110036 containerd[1515]: time="2025-01-29T12:58:17.109641046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-i7wtu.gb1.brightbox.com,Uid:99b063b921dad5cc944375410d40b924,Namespace:kube-system,Attempt:0,}"
Jan 29 12:58:17.241691 kubelet[2344]: E0129 12:58:17.241616 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i7wtu.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.18:6443: connect: connection refused" interval="800ms"
Jan 29 12:58:17.430976 kubelet[2344]: W0129 12:58:17.430736 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.84.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i7wtu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443: connect: connection refused
Jan 29 12:58:17.430976 kubelet[2344]: E0129 12:58:17.430934 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.84.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i7wtu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:58:17.445475 kubelet[2344]: I0129 12:58:17.445437 2344 kubelet_node_status.go:72] "Attempting to register node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:17.445796 kubelet[2344]: E0129 12:58:17.445743 2344 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.84.18:6443/api/v1/nodes\": dial tcp 10.243.84.18:6443: connect: connection refused" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:17.533963 kubelet[2344]: W0129 12:58:17.533784 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.84.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443: connect: connection refused
Jan 29 12:58:17.533963 kubelet[2344]: E0129 12:58:17.533898 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.84.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:58:17.662994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1497462518.mount: Deactivated successfully.
Jan 29 12:58:17.687597 containerd[1515]: time="2025-01-29T12:58:17.686095437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:58:17.688777 containerd[1515]: time="2025-01-29T12:58:17.688726983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 29 12:58:17.690094 containerd[1515]: time="2025-01-29T12:58:17.690040304Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:58:17.691275 containerd[1515]: time="2025-01-29T12:58:17.691228768Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:58:17.692873 containerd[1515]: time="2025-01-29T12:58:17.692835172Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:58:17.693191 containerd[1515]: time="2025-01-29T12:58:17.693155968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:58:17.693616 containerd[1515]: time="2025-01-29T12:58:17.693582752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:58:17.699017 containerd[1515]: time="2025-01-29T12:58:17.698968209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:58:17.700231 containerd[1515]: time="2025-01-29T12:58:17.700200727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.817182ms"
Jan 29 12:58:17.702822 containerd[1515]: time="2025-01-29T12:58:17.702551451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.795294ms"
Jan 29 12:58:17.707844 containerd[1515]: time="2025-01-29T12:58:17.707529238Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.670012ms"
Jan 29 12:58:17.728757 kubelet[2344]: W0129 12:58:17.728669 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.84.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443: connect: connection refused
Jan 29 12:58:17.728757 kubelet[2344]: E0129 12:58:17.728757 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.84.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:58:17.962151 containerd[1515]: time="2025-01-29T12:58:17.961613325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:58:17.962151 containerd[1515]: time="2025-01-29T12:58:17.961728783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:58:17.962151 containerd[1515]: time="2025-01-29T12:58:17.961749738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:17.964262 containerd[1515]: time="2025-01-29T12:58:17.964172719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:17.968001 containerd[1515]: time="2025-01-29T12:58:17.966194017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:58:17.968001 containerd[1515]: time="2025-01-29T12:58:17.966260767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:58:17.968001 containerd[1515]: time="2025-01-29T12:58:17.966292501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:17.968001 containerd[1515]: time="2025-01-29T12:58:17.966391329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:17.969306 containerd[1515]: time="2025-01-29T12:58:17.969106172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:58:17.969306 containerd[1515]: time="2025-01-29T12:58:17.969177890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:58:17.969306 containerd[1515]: time="2025-01-29T12:58:17.969202829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:17.969518 containerd[1515]: time="2025-01-29T12:58:17.969336295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:58:17.984814 kubelet[2344]: W0129 12:58:17.982307 2344 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.84.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.84.18:6443: connect: connection refused
Jan 29 12:58:17.984814 kubelet[2344]: E0129 12:58:17.984317 2344 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.84.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:58:18.019612 systemd[1]: Started cri-containerd-ee1a3a685c6a21abd2b3faa6c2958331c3f5471cfaf7f4c567e12f0b0835798a.scope - libcontainer container ee1a3a685c6a21abd2b3faa6c2958331c3f5471cfaf7f4c567e12f0b0835798a.
Jan 29 12:58:18.033202 systemd[1]: Started cri-containerd-058b51c7999c384e195c682046013af6212b35b7a384d81d393bb02ada0844c0.scope - libcontainer container 058b51c7999c384e195c682046013af6212b35b7a384d81d393bb02ada0844c0.
Jan 29 12:58:18.038068 systemd[1]: Started cri-containerd-642798839b64de3259bf188e7dcf04c2f0efa61cb8e2125e099e473c1c4e71f6.scope - libcontainer container 642798839b64de3259bf188e7dcf04c2f0efa61cb8e2125e099e473c1c4e71f6.
Jan 29 12:58:18.044201 kubelet[2344]: E0129 12:58:18.044103 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i7wtu.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.18:6443: connect: connection refused" interval="1.6s"
Jan 29 12:58:18.136691 containerd[1515]: time="2025-01-29T12:58:18.136371397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i7wtu.gb1.brightbox.com,Uid:7e4b39adb34c566a08992129ca4808a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee1a3a685c6a21abd2b3faa6c2958331c3f5471cfaf7f4c567e12f0b0835798a\""
Jan 29 12:58:18.153862 containerd[1515]: time="2025-01-29T12:58:18.152617058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-i7wtu.gb1.brightbox.com,Uid:99b063b921dad5cc944375410d40b924,Namespace:kube-system,Attempt:0,} returns sandbox id \"058b51c7999c384e195c682046013af6212b35b7a384d81d393bb02ada0844c0\""
Jan 29 12:58:18.159147 containerd[1515]: time="2025-01-29T12:58:18.159099806Z" level=info msg="CreateContainer within sandbox \"ee1a3a685c6a21abd2b3faa6c2958331c3f5471cfaf7f4c567e12f0b0835798a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 12:58:18.159701 containerd[1515]: time="2025-01-29T12:58:18.159645915Z" level=info msg="CreateContainer within sandbox \"058b51c7999c384e195c682046013af6212b35b7a384d81d393bb02ada0844c0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 12:58:18.178590 containerd[1515]: time="2025-01-29T12:58:18.178531601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i7wtu.gb1.brightbox.com,Uid:c809cda956acd1e1092b3682b148238e,Namespace:kube-system,Attempt:0,} returns sandbox id \"642798839b64de3259bf188e7dcf04c2f0efa61cb8e2125e099e473c1c4e71f6\""
Jan 29 12:58:18.183825 containerd[1515]: time="2025-01-29T12:58:18.183212931Z" level=info msg="CreateContainer within sandbox \"642798839b64de3259bf188e7dcf04c2f0efa61cb8e2125e099e473c1c4e71f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 12:58:18.184224 containerd[1515]: time="2025-01-29T12:58:18.184192156Z" level=info msg="CreateContainer within sandbox \"058b51c7999c384e195c682046013af6212b35b7a384d81d393bb02ada0844c0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53c3c39b8f223b94b9566c42e9296dffcc4ce548acc261354aa2a32cf092a91d\""
Jan 29 12:58:18.185160 containerd[1515]: time="2025-01-29T12:58:18.185115104Z" level=info msg="StartContainer for \"53c3c39b8f223b94b9566c42e9296dffcc4ce548acc261354aa2a32cf092a91d\""
Jan 29 12:58:18.194708 kubelet[2344]: E0129 12:58:18.193973 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.84.18:6443/api/v1/namespaces/default/events\": dial tcp 10.243.84.18:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-i7wtu.gb1.brightbox.com.181f2b370bf9a356 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-i7wtu.gb1.brightbox.com,UID:srv-i7wtu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-i7wtu.gb1.brightbox.com,},FirstTimestamp:2025-01-29 12:58:16.612406102 +0000 UTC m=+0.877897198,LastTimestamp:2025-01-29 12:58:16.612406102 +0000 UTC m=+0.877897198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-i7wtu.gb1.brightbox.com,}"
Jan 29 12:58:18.199774 containerd[1515]: time="2025-01-29T12:58:18.199715516Z" level=info msg="CreateContainer within sandbox \"ee1a3a685c6a21abd2b3faa6c2958331c3f5471cfaf7f4c567e12f0b0835798a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"55cadb7c6765febbdd5108e31bea3fa907d7cbf5fb76058679660d1668e39ede\""
Jan 29 12:58:18.200811 containerd[1515]: time="2025-01-29T12:58:18.200652093Z" level=info msg="StartContainer for \"55cadb7c6765febbdd5108e31bea3fa907d7cbf5fb76058679660d1668e39ede\""
Jan 29 12:58:18.204366 containerd[1515]: time="2025-01-29T12:58:18.204250160Z" level=info msg="CreateContainer within sandbox \"642798839b64de3259bf188e7dcf04c2f0efa61cb8e2125e099e473c1c4e71f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f721c4e743c5b93fd9b8548d4d73ea4cb13f43e7e3b64aa7ab2559a90c2df99c\""
Jan 29 12:58:18.204879 containerd[1515]: time="2025-01-29T12:58:18.204833354Z" level=info msg="StartContainer for \"f721c4e743c5b93fd9b8548d4d73ea4cb13f43e7e3b64aa7ab2559a90c2df99c\""
Jan 29 12:58:18.255215 systemd[1]: Started cri-containerd-53c3c39b8f223b94b9566c42e9296dffcc4ce548acc261354aa2a32cf092a91d.scope - libcontainer container 53c3c39b8f223b94b9566c42e9296dffcc4ce548acc261354aa2a32cf092a91d.
Jan 29 12:58:18.256756 kubelet[2344]: I0129 12:58:18.256052 2344 kubelet_node_status.go:72] "Attempting to register node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:18.256756 kubelet[2344]: E0129 12:58:18.256594 2344 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.84.18:6443/api/v1/nodes\": dial tcp 10.243.84.18:6443: connect: connection refused" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:18.266001 systemd[1]: Started cri-containerd-f721c4e743c5b93fd9b8548d4d73ea4cb13f43e7e3b64aa7ab2559a90c2df99c.scope - libcontainer container f721c4e743c5b93fd9b8548d4d73ea4cb13f43e7e3b64aa7ab2559a90c2df99c.
Jan 29 12:58:18.289010 systemd[1]: Started cri-containerd-55cadb7c6765febbdd5108e31bea3fa907d7cbf5fb76058679660d1668e39ede.scope - libcontainer container 55cadb7c6765febbdd5108e31bea3fa907d7cbf5fb76058679660d1668e39ede.
Jan 29 12:58:18.386969 containerd[1515]: time="2025-01-29T12:58:18.386820575Z" level=info msg="StartContainer for \"53c3c39b8f223b94b9566c42e9296dffcc4ce548acc261354aa2a32cf092a91d\" returns successfully"
Jan 29 12:58:18.390854 containerd[1515]: time="2025-01-29T12:58:18.390055205Z" level=info msg="StartContainer for \"f721c4e743c5b93fd9b8548d4d73ea4cb13f43e7e3b64aa7ab2559a90c2df99c\" returns successfully"
Jan 29 12:58:18.410713 containerd[1515]: time="2025-01-29T12:58:18.410644108Z" level=info msg="StartContainer for \"55cadb7c6765febbdd5108e31bea3fa907d7cbf5fb76058679660d1668e39ede\" returns successfully"
Jan 29 12:58:18.683337 kubelet[2344]: E0129 12:58:18.682203 2344 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.84.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.84.18:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:58:19.861038 kubelet[2344]: I0129 12:58:19.860982 2344 kubelet_node_status.go:72] "Attempting to register node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:21.310942 kubelet[2344]: E0129 12:58:21.310805 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-i7wtu.gb1.brightbox.com\" not found" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:21.431025 kubelet[2344]: I0129 12:58:21.430977 2344 kubelet_node_status.go:75] "Successfully registered node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:21.607576 kubelet[2344]: I0129 12:58:21.606274 2344 apiserver.go:52] "Watching apiserver"
Jan 29 12:58:21.634461 kubelet[2344]: I0129 12:58:21.634368 2344 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 12:58:23.061693 kubelet[2344]: W0129 12:58:23.061628 2344 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 12:58:23.765010 systemd[1]: Reloading requested from client PID 2624 ('systemctl') (unit session-11.scope)...
Jan 29 12:58:23.765070 systemd[1]: Reloading...
Jan 29 12:58:23.893951 zram_generator::config[2665]: No configuration found.
Jan 29 12:58:24.068777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:58:24.192074 systemd[1]: Reloading finished in 426 ms.
Jan 29 12:58:24.255358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:58:24.261675 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 12:58:24.262245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:58:24.262359 systemd[1]: kubelet.service: Consumed 1.400s CPU time, 113.5M memory peak, 0B memory swap peak.
Jan 29 12:58:24.269169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:58:24.473849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:58:24.484386 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:58:24.593740 kubelet[2726]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:58:24.593740 kubelet[2726]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:58:24.593740 kubelet[2726]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:58:24.595157 kubelet[2726]: I0129 12:58:24.595063 2726 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:58:24.604215 kubelet[2726]: I0129 12:58:24.604114 2726 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 12:58:24.604215 kubelet[2726]: I0129 12:58:24.604170 2726 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:58:24.604516 kubelet[2726]: I0129 12:58:24.604475 2726 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 12:58:24.606500 kubelet[2726]: I0129 12:58:24.606468 2726 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 12:58:24.612292 kubelet[2726]: I0129 12:58:24.612022 2726 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:58:24.617939 kubelet[2726]: E0129 12:58:24.617903 2726 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 12:58:24.618048 kubelet[2726]: I0129 12:58:24.617942 2726 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 12:58:24.623725 kubelet[2726]: I0129 12:58:24.623343 2726 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:58:24.623725 kubelet[2726]: I0129 12:58:24.623601 2726 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 12:58:24.624306 kubelet[2726]: I0129 12:58:24.624107 2726 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:58:24.626665 kubelet[2726]: I0129 12:58:24.624656 2726 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i7wtu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 12:58:24.627017 kubelet[2726]: I0129 12:58:24.626984 2726 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:58:24.627238 kubelet[2726]: I0129 12:58:24.627132 2726 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 12:58:24.627559 kubelet[2726]: I0129 12:58:24.627517 2726 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:58:24.629713 kubelet[2726]: I0129 12:58:24.628021 2726 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 12:58:24.629713 kubelet[2726]: I0129 12:58:24.628077 2726 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:58:24.629713 kubelet[2726]: I0129 12:58:24.628145 2726 kubelet.go:314] "Adding apiserver pod source"
Jan 29 12:58:24.629713 kubelet[2726]: I0129 12:58:24.628175 2726 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:58:24.632920 kubelet[2726]: I0129 12:58:24.632897 2726 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 12:58:24.633620 kubelet[2726]: I0129 12:58:24.633587 2726 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:58:24.635435 kubelet[2726]: I0129 12:58:24.635415 2726 server.go:1269] "Started kubelet"
Jan 29 12:58:24.645397 kubelet[2726]: I0129 12:58:24.645174 2726 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:58:24.652529 kubelet[2726]: I0129 12:58:24.651886 2726 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:58:24.653156 kubelet[2726]: I0129 12:58:24.652766 2726 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:58:24.663425 kubelet[2726]: I0129 12:58:24.656876 2726 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 12:58:24.668889 kubelet[2726]: I0129 12:58:24.668495 2726 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:58:24.668889 kubelet[2726]: I0129 12:58:24.657391 2726 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 12:58:24.669444 kubelet[2726]: I0129 12:58:24.657251 2726 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 12:58:24.670453 kubelet[2726]: I0129 12:58:24.663107 2726 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 12:58:24.671525 kubelet[2726]: E0129 12:58:24.658137 2726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-i7wtu.gb1.brightbox.com\" not found"
Jan 29 12:58:24.675347 kubelet[2726]: I0129 12:58:24.674203 2726 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:58:24.679183 kubelet[2726]: I0129 12:58:24.679146 2726 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:58:24.680493 kubelet[2726]: I0129 12:58:24.679328 2726 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 12:58:24.681169 kubelet[2726]: E0129 12:58:24.681127 2726 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 12:58:24.683865 kubelet[2726]: I0129 12:58:24.683762 2726 factory.go:221] Registration of the containerd container factory successfully
Jan 29 12:58:24.706819 kubelet[2726]: I0129 12:58:24.704032 2726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:58:24.710287 kubelet[2726]: I0129 12:58:24.710235 2726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:58:24.711251 kubelet[2726]: I0129 12:58:24.711229 2726 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 12:58:24.711464 kubelet[2726]: I0129 12:58:24.711444 2726 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 12:58:24.711739 kubelet[2726]: E0129 12:58:24.711701 2726 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 12:58:24.789943 kubelet[2726]: I0129 12:58:24.789402 2726 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 12:58:24.789943 kubelet[2726]: I0129 12:58:24.789453 2726 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 12:58:24.789943 kubelet[2726]: I0129 12:58:24.789492 2726 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:58:24.789943 kubelet[2726]: I0129 12:58:24.789820 2726 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 12:58:24.789943 kubelet[2726]: I0129 12:58:24.789843 2726 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 12:58:24.789943 kubelet[2726]: I0129 12:58:24.789900 2726 policy_none.go:49] "None policy: Start"
Jan 29 12:58:24.795029 kubelet[2726]: I0129 12:58:24.791871 2726 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 12:58:24.795029 kubelet[2726]: I0129 12:58:24.791929 2726 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:58:24.795029 kubelet[2726]: I0129 12:58:24.792497 2726 state_mem.go:75] "Updated machine memory state"
Jan 29 12:58:24.800630 sudo[2757]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 12:58:24.802048 sudo[2757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 12:58:24.805908 kubelet[2726]: I0129 12:58:24.805473 2726 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:58:24.806650 kubelet[2726]: I0129 12:58:24.806616 2726 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 12:58:24.806742 kubelet[2726]: I0129 12:58:24.806658 2726 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:58:24.807344 kubelet[2726]: I0129 12:58:24.807255 2726 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:58:24.838813 kubelet[2726]: W0129 12:58:24.837138 2726 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 12:58:24.841358 kubelet[2726]: W0129 12:58:24.841330 2726 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 12:58:24.842810 kubelet[2726]: W0129 12:58:24.841527 2726 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 12:58:24.842810 kubelet[2726]: E0129 12:58:24.841602 2726 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.878863 kubelet[2726]: I0129 12:58:24.878820 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99b063b921dad5cc944375410d40b924-ca-certs\") pod \"kube-apiserver-srv-i7wtu.gb1.brightbox.com\" (UID: \"99b063b921dad5cc944375410d40b924\") " pod="kube-system/kube-apiserver-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879203 kubelet[2726]: I0129 12:58:24.879153 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-ca-certs\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879650 kubelet[2726]: I0129 12:58:24.879343 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-kubeconfig\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879650 kubelet[2726]: I0129 12:58:24.879402 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879650 kubelet[2726]: I0129 12:58:24.879430 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e4b39adb34c566a08992129ca4808a5-kubeconfig\") pod \"kube-scheduler-srv-i7wtu.gb1.brightbox.com\" (UID: \"7e4b39adb34c566a08992129ca4808a5\") " pod="kube-system/kube-scheduler-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879650 kubelet[2726]: I0129 12:58:24.879464 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99b063b921dad5cc944375410d40b924-k8s-certs\") pod \"kube-apiserver-srv-i7wtu.gb1.brightbox.com\" (UID: \"99b063b921dad5cc944375410d40b924\") " pod="kube-system/kube-apiserver-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879650 kubelet[2726]: I0129 12:58:24.879492 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99b063b921dad5cc944375410d40b924-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i7wtu.gb1.brightbox.com\" (UID: \"99b063b921dad5cc944375410d40b924\") " pod="kube-system/kube-apiserver-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879970 kubelet[2726]: I0129 12:58:24.879517 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-flexvolume-dir\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.879970 kubelet[2726]: I0129 12:58:24.879539 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c809cda956acd1e1092b3682b148238e-k8s-certs\") pod \"kube-controller-manager-srv-i7wtu.gb1.brightbox.com\" (UID: \"c809cda956acd1e1092b3682b148238e\") " pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.948559 kubelet[2726]: I0129 12:58:24.947526 2726 kubelet_node_status.go:72] "Attempting to register node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.962028 kubelet[2726]: I0129 12:58:24.962001 2726 kubelet_node_status.go:111] "Node was previously registered" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:24.963337 kubelet[2726]: I0129 12:58:24.962298 2726 kubelet_node_status.go:75] "Successfully registered node" node="srv-i7wtu.gb1.brightbox.com"
Jan 29 12:58:25.538448 sudo[2757]: pam_unix(sudo:session): session closed for user root
Jan 29 12:58:25.642030 kubelet[2726]: I0129 12:58:25.641971 2726 apiserver.go:52] "Watching apiserver"
Jan 29 12:58:25.669967 kubelet[2726]: I0129 12:58:25.669911 2726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 12:58:25.873168 kubelet[2726]: I0129 12:58:25.873015 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-i7wtu.gb1.brightbox.com" podStartSLOduration=1.872772367 podStartE2EDuration="1.872772367s" podCreationTimestamp="2025-01-29 12:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:58:25.845794138 +0000 UTC m=+1.334410794" watchObservedRunningTime="2025-01-29 12:58:25.872772367 +0000 UTC m=+1.361389018"
Jan 29 12:58:25.874178 kubelet[2726]: I0129 12:58:25.874132 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-i7wtu.gb1.brightbox.com" podStartSLOduration=2.87412157 podStartE2EDuration="2.87412157s" podCreationTimestamp="2025-01-29 12:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:58:25.870138854 +0000 UTC m=+1.358755513" watchObservedRunningTime="2025-01-29 12:58:25.87412157 +0000 UTC m=+1.362738211"
Jan 29 12:58:25.925766 kubelet[2726]: I0129 12:58:25.925566 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-i7wtu.gb1.brightbox.com" podStartSLOduration=1.925546106 podStartE2EDuration="1.925546106s" podCreationTimestamp="2025-01-29 12:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:58:25.895312399 +0000 UTC m=+1.383929051" watchObservedRunningTime="2025-01-29 12:58:25.925546106 +0000 UTC m=+1.414162758"
Jan 29 12:58:27.515255 sudo[1780]: pam_unix(sudo:session): session closed for user root
Jan 29 12:58:27.658364 sshd[1779]: Connection closed by 147.75.109.163 port 34586
Jan 29 12:58:27.661010 sshd-session[1777]: pam_unix(sshd:session): session closed for user core
Jan 29 12:58:27.667139 systemd[1]: sshd@8-10.243.84.18:22-147.75.109.163:34586.service: Deactivated successfully.
Jan 29 12:58:27.669614 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:58:27.669930 systemd[1]: session-11.scope: Consumed 6.666s CPU time, 138.3M memory peak, 0B memory swap peak.
Jan 29 12:58:27.670668 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:58:27.672327 systemd-logind[1492]: Removed session 11.
Jan 29 12:58:30.338445 kubelet[2726]: I0129 12:58:30.338143 2726 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 12:58:30.341438 kubelet[2726]: I0129 12:58:30.340065 2726 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 12:58:30.341529 containerd[1515]: time="2025-01-29T12:58:30.338967032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 12:58:30.949735 systemd[1]: Created slice kubepods-besteffort-pod14ab41c0_a587_4567_9e5d_88cdeb3940e9.slice - libcontainer container kubepods-besteffort-pod14ab41c0_a587_4567_9e5d_88cdeb3940e9.slice.
Jan 29 12:58:30.969132 systemd[1]: Created slice kubepods-burstable-podcf516ff4_fc12_42c7_96c9_710ba06ef722.slice - libcontainer container kubepods-burstable-podcf516ff4_fc12_42c7_96c9_710ba06ef722.slice.
Jan 29 12:58:31.022824 kubelet[2726]: I0129 12:58:31.022706 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14ab41c0-a587-4567-9e5d-88cdeb3940e9-xtables-lock\") pod \"kube-proxy-h6c6b\" (UID: \"14ab41c0-a587-4567-9e5d-88cdeb3940e9\") " pod="kube-system/kube-proxy-h6c6b"
Jan 29 12:58:31.022824 kubelet[2726]: I0129 12:58:31.022812 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14ab41c0-a587-4567-9e5d-88cdeb3940e9-lib-modules\") pod \"kube-proxy-h6c6b\" (UID: \"14ab41c0-a587-4567-9e5d-88cdeb3940e9\") " pod="kube-system/kube-proxy-h6c6b"
Jan 29 12:58:31.023232 kubelet[2726]: I0129 12:58:31.022869 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cni-path\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023232 kubelet[2726]: I0129 12:58:31.022900 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-net\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023232 kubelet[2726]: I0129 12:58:31.022942 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64tbr\" (UniqueName: \"kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-kube-api-access-64tbr\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023232 kubelet[2726]: I0129 12:58:31.022971 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-run\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023232 kubelet[2726]: I0129 12:58:31.023015 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-bpf-maps\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023232 kubelet[2726]: I0129 12:58:31.023054 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-cgroup\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023525 kubelet[2726]: I0129 12:58:31.023098 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-xtables-lock\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023525 kubelet[2726]: I0129 12:58:31.023127 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-hubble-tls\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023525 kubelet[2726]: I0129 12:58:31.023155 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-config-path\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023525 kubelet[2726]: I0129 12:58:31.023194 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-kernel\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023525 kubelet[2726]: I0129 12:58:31.023237 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-hostproc\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023525 kubelet[2726]: I0129 12:58:31.023267 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14ab41c0-a587-4567-9e5d-88cdeb3940e9-kube-proxy\") pod \"kube-proxy-h6c6b\" (UID: \"14ab41c0-a587-4567-9e5d-88cdeb3940e9\") " pod="kube-system/kube-proxy-h6c6b"
Jan 29 12:58:31.023846 kubelet[2726]: I0129 12:58:31.023309 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf516ff4-fc12-42c7-96c9-710ba06ef722-clustermesh-secrets\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023846 kubelet[2726]: I0129 12:58:31.023341 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-etc-cni-netd\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.023846 kubelet[2726]: I0129 12:58:31.023370 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4db57\" (UniqueName: \"kubernetes.io/projected/14ab41c0-a587-4567-9e5d-88cdeb3940e9-kube-api-access-4db57\") pod \"kube-proxy-h6c6b\" (UID: \"14ab41c0-a587-4567-9e5d-88cdeb3940e9\") " pod="kube-system/kube-proxy-h6c6b"
Jan 29 12:58:31.023846 kubelet[2726]: I0129 12:58:31.023400 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-lib-modules\") pod \"cilium-5q8wt\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " pod="kube-system/cilium-5q8wt"
Jan 29 12:58:31.152836 kubelet[2726]: E0129 12:58:31.146526 2726 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 29 12:58:31.152836 kubelet[2726]: E0129 12:58:31.146609 2726 projected.go:194] Error preparing data for projected volume kube-api-access-4db57 for pod kube-system/kube-proxy-h6c6b: configmap "kube-root-ca.crt" not found
Jan 29 12:58:31.152836 kubelet[2726]: E0129 12:58:31.146723 2726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/14ab41c0-a587-4567-9e5d-88cdeb3940e9-kube-api-access-4db57 podName:14ab41c0-a587-4567-9e5d-88cdeb3940e9 nodeName:}" failed. No retries permitted until 2025-01-29 12:58:31.646685189 +0000 UTC m=+7.135301830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4db57" (UniqueName: "kubernetes.io/projected/14ab41c0-a587-4567-9e5d-88cdeb3940e9-kube-api-access-4db57") pod "kube-proxy-h6c6b" (UID: "14ab41c0-a587-4567-9e5d-88cdeb3940e9") : configmap "kube-root-ca.crt" not found
Jan 29 12:58:31.153607 kubelet[2726]: E0129 12:58:31.153565 2726 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 29 12:58:31.153691 kubelet[2726]: E0129 12:58:31.153617 2726 projected.go:194] Error preparing data for projected volume kube-api-access-64tbr for pod kube-system/cilium-5q8wt: configmap "kube-root-ca.crt" not found
Jan 29 12:58:31.153691 kubelet[2726]: E0129 12:58:31.153659 2726 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-kube-api-access-64tbr podName:cf516ff4-fc12-42c7-96c9-710ba06ef722 nodeName:}" failed. No retries permitted until 2025-01-29 12:58:31.653644426 +0000 UTC m=+7.142261067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-64tbr" (UniqueName: "kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-kube-api-access-64tbr") pod "cilium-5q8wt" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722") : configmap "kube-root-ca.crt" not found
Jan 29 12:58:31.383027 systemd[1]: Created slice kubepods-besteffort-poda3b44990_636b_4493_8905_3f93af6a411c.slice - libcontainer container kubepods-besteffort-poda3b44990_636b_4493_8905_3f93af6a411c.slice.
Jan 29 12:58:31.427109 kubelet[2726]: I0129 12:58:31.427050 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2l8v\" (UniqueName: \"kubernetes.io/projected/a3b44990-636b-4493-8905-3f93af6a411c-kube-api-access-b2l8v\") pod \"cilium-operator-5d85765b45-cvdpg\" (UID: \"a3b44990-636b-4493-8905-3f93af6a411c\") " pod="kube-system/cilium-operator-5d85765b45-cvdpg" Jan 29 12:58:31.427816 kubelet[2726]: I0129 12:58:31.427158 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3b44990-636b-4493-8905-3f93af6a411c-cilium-config-path\") pod \"cilium-operator-5d85765b45-cvdpg\" (UID: \"a3b44990-636b-4493-8905-3f93af6a411c\") " pod="kube-system/cilium-operator-5d85765b45-cvdpg" Jan 29 12:58:31.690145 containerd[1515]: time="2025-01-29T12:58:31.689612463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cvdpg,Uid:a3b44990-636b-4493-8905-3f93af6a411c,Namespace:kube-system,Attempt:0,}" Jan 29 12:58:31.760360 containerd[1515]: time="2025-01-29T12:58:31.759201187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:58:31.760360 containerd[1515]: time="2025-01-29T12:58:31.759570462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:58:31.762857 containerd[1515]: time="2025-01-29T12:58:31.760847259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:31.762857 containerd[1515]: time="2025-01-29T12:58:31.761030356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:31.799067 systemd[1]: Started cri-containerd-8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd.scope - libcontainer container 8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd. Jan 29 12:58:31.864626 containerd[1515]: time="2025-01-29T12:58:31.864019787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6c6b,Uid:14ab41c0-a587-4567-9e5d-88cdeb3940e9,Namespace:kube-system,Attempt:0,}" Jan 29 12:58:31.877463 containerd[1515]: time="2025-01-29T12:58:31.876984700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5q8wt,Uid:cf516ff4-fc12-42c7-96c9-710ba06ef722,Namespace:kube-system,Attempt:0,}" Jan 29 12:58:31.891468 containerd[1515]: time="2025-01-29T12:58:31.891419180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cvdpg,Uid:a3b44990-636b-4493-8905-3f93af6a411c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd\"" Jan 29 12:58:31.896019 containerd[1515]: time="2025-01-29T12:58:31.895898408Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:58:31.912218 containerd[1515]: time="2025-01-29T12:58:31.912010710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:58:31.912218 containerd[1515]: time="2025-01-29T12:58:31.912174572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:58:31.912915 containerd[1515]: time="2025-01-29T12:58:31.912519738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:31.913270 containerd[1515]: time="2025-01-29T12:58:31.913197975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:31.933908 containerd[1515]: time="2025-01-29T12:58:31.933442508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:58:31.933908 containerd[1515]: time="2025-01-29T12:58:31.933569775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:58:31.933908 containerd[1515]: time="2025-01-29T12:58:31.933601023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:31.933908 containerd[1515]: time="2025-01-29T12:58:31.933748449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:58:31.955032 systemd[1]: Started cri-containerd-cefd3369c2ff3bb3cd0cfa9e42527b0b02ce88248e1f28eb4ca2b39c381d3920.scope - libcontainer container cefd3369c2ff3bb3cd0cfa9e42527b0b02ce88248e1f28eb4ca2b39c381d3920. Jan 29 12:58:31.976979 systemd[1]: Started cri-containerd-4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee.scope - libcontainer container 4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee. 
Jan 29 12:58:32.033946 containerd[1515]: time="2025-01-29T12:58:32.033773995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6c6b,Uid:14ab41c0-a587-4567-9e5d-88cdeb3940e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cefd3369c2ff3bb3cd0cfa9e42527b0b02ce88248e1f28eb4ca2b39c381d3920\""
Jan 29 12:58:32.046377 containerd[1515]: time="2025-01-29T12:58:32.046124702Z" level=info msg="CreateContainer within sandbox \"cefd3369c2ff3bb3cd0cfa9e42527b0b02ce88248e1f28eb4ca2b39c381d3920\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 12:58:32.059113 containerd[1515]: time="2025-01-29T12:58:32.059022834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5q8wt,Uid:cf516ff4-fc12-42c7-96c9-710ba06ef722,Namespace:kube-system,Attempt:0,} returns sandbox id \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\""
Jan 29 12:58:32.079251 containerd[1515]: time="2025-01-29T12:58:32.079196985Z" level=info msg="CreateContainer within sandbox \"cefd3369c2ff3bb3cd0cfa9e42527b0b02ce88248e1f28eb4ca2b39c381d3920\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26a602fe7aeee5a08aaf077ef20bafe6c820d31990c6502060d06bcc4c3f4992\""
Jan 29 12:58:32.080311 containerd[1515]: time="2025-01-29T12:58:32.080251167Z" level=info msg="StartContainer for \"26a602fe7aeee5a08aaf077ef20bafe6c820d31990c6502060d06bcc4c3f4992\""
Jan 29 12:58:32.121198 systemd[1]: Started cri-containerd-26a602fe7aeee5a08aaf077ef20bafe6c820d31990c6502060d06bcc4c3f4992.scope - libcontainer container 26a602fe7aeee5a08aaf077ef20bafe6c820d31990c6502060d06bcc4c3f4992.
Jan 29 12:58:32.198262 containerd[1515]: time="2025-01-29T12:58:32.197941492Z" level=info msg="StartContainer for \"26a602fe7aeee5a08aaf077ef20bafe6c820d31990c6502060d06bcc4c3f4992\" returns successfully"
Jan 29 12:58:32.795263 kubelet[2726]: I0129 12:58:32.794453 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h6c6b" podStartSLOduration=2.794402538 podStartE2EDuration="2.794402538s" podCreationTimestamp="2025-01-29 12:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:58:32.793756904 +0000 UTC m=+8.282373560" watchObservedRunningTime="2025-01-29 12:58:32.794402538 +0000 UTC m=+8.283019193"
Jan 29 12:58:33.808694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289595302.mount: Deactivated successfully.
Jan 29 12:58:34.591412 containerd[1515]: time="2025-01-29T12:58:34.591335197Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:34.592754 containerd[1515]: time="2025-01-29T12:58:34.592629591Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 29 12:58:34.593770 containerd[1515]: time="2025-01-29T12:58:34.593719691Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:34.595882 containerd[1515]: time="2025-01-29T12:58:34.595852140Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.699895636s"
Jan 29 12:58:34.596084 containerd[1515]: time="2025-01-29T12:58:34.595978334Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 29 12:58:34.599915 containerd[1515]: time="2025-01-29T12:58:34.598245777Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 12:58:34.599915 containerd[1515]: time="2025-01-29T12:58:34.599237823Z" level=info msg="CreateContainer within sandbox \"8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 12:58:34.626232 containerd[1515]: time="2025-01-29T12:58:34.626135848Z" level=info msg="CreateContainer within sandbox \"8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\""
Jan 29 12:58:34.626959 containerd[1515]: time="2025-01-29T12:58:34.626828779Z" level=info msg="StartContainer for \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\""
Jan 29 12:58:34.681068 systemd[1]: Started cri-containerd-e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586.scope - libcontainer container e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586.
Jan 29 12:58:34.729609 containerd[1515]: time="2025-01-29T12:58:34.729521308Z" level=info msg="StartContainer for \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\" returns successfully"
Jan 29 12:58:34.853412 kubelet[2726]: I0129 12:58:34.852491 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-cvdpg" podStartSLOduration=1.150382641 podStartE2EDuration="3.852471931s" podCreationTimestamp="2025-01-29 12:58:31 +0000 UTC" firstStartedPulling="2025-01-29 12:58:31.894980994 +0000 UTC m=+7.383597643" lastFinishedPulling="2025-01-29 12:58:34.597070282 +0000 UTC m=+10.085686933" observedRunningTime="2025-01-29 12:58:34.826898108 +0000 UTC m=+10.315514771" watchObservedRunningTime="2025-01-29 12:58:34.852471931 +0000 UTC m=+10.341088576"
Jan 29 12:58:42.105722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288731628.mount: Deactivated successfully.
Jan 29 12:58:45.498254 containerd[1515]: time="2025-01-29T12:58:45.498145159Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:45.500343 containerd[1515]: time="2025-01-29T12:58:45.500273908Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 29 12:58:45.500823 containerd[1515]: time="2025-01-29T12:58:45.500486914Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:58:45.504706 containerd[1515]: time="2025-01-29T12:58:45.504672695Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.906378757s"
Jan 29 12:58:45.505107 containerd[1515]: time="2025-01-29T12:58:45.504867521Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 29 12:58:45.509234 containerd[1515]: time="2025-01-29T12:58:45.509109138Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 12:58:45.590826 containerd[1515]: time="2025-01-29T12:58:45.590408088Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\""
Jan 29 12:58:45.594017 containerd[1515]: time="2025-01-29T12:58:45.593987435Z" level=info msg="StartContainer for \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\""
Jan 29 12:58:45.718045 systemd[1]: Started cri-containerd-d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8.scope - libcontainer container d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8.
Jan 29 12:58:45.796850 systemd[1]: cri-containerd-d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8.scope: Deactivated successfully.
Jan 29 12:58:45.811152 containerd[1515]: time="2025-01-29T12:58:45.811058669Z" level=info msg="StartContainer for \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\" returns successfully"
Jan 29 12:58:45.983133 containerd[1515]: time="2025-01-29T12:58:45.964847261Z" level=info msg="shim disconnected" id=d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8 namespace=k8s.io
Jan 29 12:58:45.983133 containerd[1515]: time="2025-01-29T12:58:45.983011309Z" level=warning msg="cleaning up after shim disconnected" id=d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8 namespace=k8s.io
Jan 29 12:58:45.983133 containerd[1515]: time="2025-01-29T12:58:45.983033078Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:58:46.584657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8-rootfs.mount: Deactivated successfully.
Jan 29 12:58:46.910489 containerd[1515]: time="2025-01-29T12:58:46.909997468Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 12:58:46.949169 containerd[1515]: time="2025-01-29T12:58:46.947457107Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\""
Jan 29 12:58:46.949169 containerd[1515]: time="2025-01-29T12:58:46.948547957Z" level=info msg="StartContainer for \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\""
Jan 29 12:58:46.996011 systemd[1]: Started cri-containerd-a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f.scope - libcontainer container a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f.
Jan 29 12:58:47.030195 containerd[1515]: time="2025-01-29T12:58:47.030039702Z" level=info msg="StartContainer for \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\" returns successfully"
Jan 29 12:58:47.048674 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:58:47.049097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:58:47.049247 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:58:47.058163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:58:47.058466 systemd[1]: cri-containerd-a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f.scope: Deactivated successfully.
Jan 29 12:58:47.094886 containerd[1515]: time="2025-01-29T12:58:47.094766766Z" level=info msg="shim disconnected" id=a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f namespace=k8s.io
Jan 29 12:58:47.094886 containerd[1515]: time="2025-01-29T12:58:47.094877651Z" level=warning msg="cleaning up after shim disconnected" id=a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f namespace=k8s.io
Jan 29 12:58:47.096457 containerd[1515]: time="2025-01-29T12:58:47.094897181Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:58:47.117505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:58:47.584838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f-rootfs.mount: Deactivated successfully.
Jan 29 12:58:47.910713 containerd[1515]: time="2025-01-29T12:58:47.910377581Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 12:58:47.963337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561188349.mount: Deactivated successfully.
Jan 29 12:58:47.974209 containerd[1515]: time="2025-01-29T12:58:47.974102915Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\""
Jan 29 12:58:47.975033 containerd[1515]: time="2025-01-29T12:58:47.974895044Z" level=info msg="StartContainer for \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\""
Jan 29 12:58:48.057067 systemd[1]: Started cri-containerd-f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a.scope - libcontainer container f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a.
Jan 29 12:58:48.111015 containerd[1515]: time="2025-01-29T12:58:48.110911465Z" level=info msg="StartContainer for \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\" returns successfully"
Jan 29 12:58:48.118821 systemd[1]: cri-containerd-f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a.scope: Deactivated successfully.
Jan 29 12:58:48.154207 containerd[1515]: time="2025-01-29T12:58:48.154124895Z" level=info msg="shim disconnected" id=f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a namespace=k8s.io
Jan 29 12:58:48.154207 containerd[1515]: time="2025-01-29T12:58:48.154203791Z" level=warning msg="cleaning up after shim disconnected" id=f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a namespace=k8s.io
Jan 29 12:58:48.155152 containerd[1515]: time="2025-01-29T12:58:48.154218164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:58:48.176275 containerd[1515]: time="2025-01-29T12:58:48.174337771Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:58:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:58:48.584139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a-rootfs.mount: Deactivated successfully.
Jan 29 12:58:48.920445 containerd[1515]: time="2025-01-29T12:58:48.918858634Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 12:58:48.943016 containerd[1515]: time="2025-01-29T12:58:48.942946201Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\""
Jan 29 12:58:48.947760 containerd[1515]: time="2025-01-29T12:58:48.944337637Z" level=info msg="StartContainer for \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\""
Jan 29 12:58:48.997010 systemd[1]: Started cri-containerd-102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06.scope - libcontainer container 102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06.
Jan 29 12:58:49.047777 systemd[1]: cri-containerd-102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06.scope: Deactivated successfully.
Jan 29 12:58:49.049917 containerd[1515]: time="2025-01-29T12:58:49.049859705Z" level=info msg="StartContainer for \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\" returns successfully"
Jan 29 12:58:49.088123 containerd[1515]: time="2025-01-29T12:58:49.087990899Z" level=info msg="shim disconnected" id=102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06 namespace=k8s.io
Jan 29 12:58:49.088463 containerd[1515]: time="2025-01-29T12:58:49.088129644Z" level=warning msg="cleaning up after shim disconnected" id=102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06 namespace=k8s.io
Jan 29 12:58:49.088463 containerd[1515]: time="2025-01-29T12:58:49.088150032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:58:49.584587 systemd[1]: run-containerd-runc-k8s.io-102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06-runc.EwMbtl.mount: Deactivated successfully.
Jan 29 12:58:49.584794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06-rootfs.mount: Deactivated successfully.
Jan 29 12:58:49.929414 containerd[1515]: time="2025-01-29T12:58:49.927270997Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 12:58:49.952947 containerd[1515]: time="2025-01-29T12:58:49.952876189Z" level=info msg="CreateContainer within sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\""
Jan 29 12:58:49.953798 containerd[1515]: time="2025-01-29T12:58:49.953748768Z" level=info msg="StartContainer for \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\""
Jan 29 12:58:50.006080 systemd[1]: Started cri-containerd-50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d.scope - libcontainer container 50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d.
Jan 29 12:58:50.057279 containerd[1515]: time="2025-01-29T12:58:50.057203069Z" level=info msg="StartContainer for \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\" returns successfully"
Jan 29 12:58:50.349246 kubelet[2726]: I0129 12:58:50.349179 2726 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 12:58:50.442942 systemd[1]: Created slice kubepods-burstable-podf707b63d_0c2c_4a9d_b972_c106e5d101ce.slice - libcontainer container kubepods-burstable-podf707b63d_0c2c_4a9d_b972_c106e5d101ce.slice.
Jan 29 12:58:50.456326 systemd[1]: Created slice kubepods-burstable-pod450ba1e5_6526_43de_b462_dcc818053fc4.slice - libcontainer container kubepods-burstable-pod450ba1e5_6526_43de_b462_dcc818053fc4.slice.
Jan 29 12:58:50.496629 kubelet[2726]: I0129 12:58:50.495996 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450ba1e5-6526-43de-b462-dcc818053fc4-config-volume\") pod \"coredns-6f6b679f8f-khd2n\" (UID: \"450ba1e5-6526-43de-b462-dcc818053fc4\") " pod="kube-system/coredns-6f6b679f8f-khd2n"
Jan 29 12:58:50.496629 kubelet[2726]: I0129 12:58:50.496066 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trs8p\" (UniqueName: \"kubernetes.io/projected/450ba1e5-6526-43de-b462-dcc818053fc4-kube-api-access-trs8p\") pod \"coredns-6f6b679f8f-khd2n\" (UID: \"450ba1e5-6526-43de-b462-dcc818053fc4\") " pod="kube-system/coredns-6f6b679f8f-khd2n"
Jan 29 12:58:50.496629 kubelet[2726]: I0129 12:58:50.496116 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f707b63d-0c2c-4a9d-b972-c106e5d101ce-config-volume\") pod \"coredns-6f6b679f8f-ld4tf\" (UID: \"f707b63d-0c2c-4a9d-b972-c106e5d101ce\") " pod="kube-system/coredns-6f6b679f8f-ld4tf"
Jan 29 12:58:50.496629 kubelet[2726]: I0129 12:58:50.496145 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w64hf\" (UniqueName: \"kubernetes.io/projected/f707b63d-0c2c-4a9d-b972-c106e5d101ce-kube-api-access-w64hf\") pod \"coredns-6f6b679f8f-ld4tf\" (UID: \"f707b63d-0c2c-4a9d-b972-c106e5d101ce\") " pod="kube-system/coredns-6f6b679f8f-ld4tf"
Jan 29 12:58:50.750890 containerd[1515]: time="2025-01-29T12:58:50.750714845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ld4tf,Uid:f707b63d-0c2c-4a9d-b972-c106e5d101ce,Namespace:kube-system,Attempt:0,}"
Jan 29 12:58:50.762352 containerd[1515]: time="2025-01-29T12:58:50.762306577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khd2n,Uid:450ba1e5-6526-43de-b462-dcc818053fc4,Namespace:kube-system,Attempt:0,}"
Jan 29 12:58:52.761409 systemd-networkd[1437]: cilium_host: Link UP
Jan 29 12:58:52.768558 systemd-networkd[1437]: cilium_net: Link UP
Jan 29 12:58:52.768942 systemd-networkd[1437]: cilium_net: Gained carrier
Jan 29 12:58:52.769272 systemd-networkd[1437]: cilium_host: Gained carrier
Jan 29 12:58:52.927095 systemd-networkd[1437]: cilium_vxlan: Link UP
Jan 29 12:58:52.928083 systemd-networkd[1437]: cilium_vxlan: Gained carrier
Jan 29 12:58:53.032103 systemd-networkd[1437]: cilium_net: Gained IPv6LL
Jan 29 12:58:53.507915 kernel: NET: Registered PF_ALG protocol family
Jan 29 12:58:53.506397 systemd-networkd[1437]: cilium_host: Gained IPv6LL
Jan 29 12:58:54.547691 systemd-networkd[1437]: lxc_health: Link UP
Jan 29 12:58:54.551023 systemd-networkd[1437]: lxc_health: Gained carrier
Jan 29 12:58:54.872471 systemd-networkd[1437]: lxc80ba64c85123: Link UP
Jan 29 12:58:54.881751 kernel: eth0: renamed from tmp4d440
Jan 29 12:58:54.887711 systemd-networkd[1437]: lxc923a6e4b94fb: Link UP
Jan 29 12:58:54.898829 kernel: eth0: renamed from tmpf3c1c
Jan 29 12:58:54.910772 systemd-networkd[1437]: lxc80ba64c85123: Gained carrier
Jan 29 12:58:54.914747 systemd-networkd[1437]: lxc923a6e4b94fb: Gained carrier
Jan 29 12:58:54.915204 systemd-networkd[1437]: cilium_vxlan: Gained IPv6LL
Jan 29 12:58:55.932598 kubelet[2726]: I0129 12:58:55.931778 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5q8wt" podStartSLOduration=12.48802502 podStartE2EDuration="25.9317197s" podCreationTimestamp="2025-01-29 12:58:30 +0000 UTC" firstStartedPulling="2025-01-29 12:58:32.062140482 +0000 UTC m=+7.550757123" lastFinishedPulling="2025-01-29 12:58:45.505835157 +0000 UTC m=+20.994451803" observedRunningTime="2025-01-29 12:58:50.961833204 +0000 UTC m=+26.450449857" watchObservedRunningTime="2025-01-29 12:58:55.9317197 +0000 UTC m=+31.420336352"
Jan 29 12:58:56.064098 systemd-networkd[1437]: lxc_health: Gained IPv6LL
Jan 29 12:58:56.192216 systemd-networkd[1437]: lxc80ba64c85123: Gained IPv6LL
Jan 29 12:58:56.576192 systemd-networkd[1437]: lxc923a6e4b94fb: Gained IPv6LL
Jan 29 12:59:00.748344 containerd[1515]: time="2025-01-29T12:59:00.748056420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:59:00.754819 containerd[1515]: time="2025-01-29T12:59:00.749010365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:59:00.754819 containerd[1515]: time="2025-01-29T12:59:00.751857260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:59:00.754819 containerd[1515]: time="2025-01-29T12:59:00.752075544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:59:00.754819 containerd[1515]: time="2025-01-29T12:59:00.750015028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:59:00.754819 containerd[1515]: time="2025-01-29T12:59:00.750146690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:59:00.754819 containerd[1515]: time="2025-01-29T12:59:00.750171734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:59:00.754819 containerd[1515]: time="2025-01-29T12:59:00.750363839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:59:00.849038 systemd[1]: Started cri-containerd-4d4402677a52fafd36fd51c1c0ef04882ffa2a131858e4d8b272bc38b6a39935.scope - libcontainer container 4d4402677a52fafd36fd51c1c0ef04882ffa2a131858e4d8b272bc38b6a39935.
Jan 29 12:59:00.854710 systemd[1]: Started cri-containerd-f3c1c4ee7cab31291a19c73a4590deccde661780f48300c31a74602b1114bba3.scope - libcontainer container f3c1c4ee7cab31291a19c73a4590deccde661780f48300c31a74602b1114bba3.
Jan 29 12:59:01.086777 containerd[1515]: time="2025-01-29T12:59:01.085057439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khd2n,Uid:450ba1e5-6526-43de-b462-dcc818053fc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4402677a52fafd36fd51c1c0ef04882ffa2a131858e4d8b272bc38b6a39935\""
Jan 29 12:59:01.098825 containerd[1515]: time="2025-01-29T12:59:01.098419124Z" level=info msg="CreateContainer within sandbox \"4d4402677a52fafd36fd51c1c0ef04882ffa2a131858e4d8b272bc38b6a39935\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:59:01.111819 containerd[1515]: time="2025-01-29T12:59:01.111718339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ld4tf,Uid:f707b63d-0c2c-4a9d-b972-c106e5d101ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3c1c4ee7cab31291a19c73a4590deccde661780f48300c31a74602b1114bba3\""
Jan 29 12:59:01.116859 containerd[1515]: time="2025-01-29T12:59:01.116574698Z" level=info msg="CreateContainer within sandbox \"f3c1c4ee7cab31291a19c73a4590deccde661780f48300c31a74602b1114bba3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:59:01.134090 containerd[1515]: time="2025-01-29T12:59:01.134015240Z" level=info msg="CreateContainer within sandbox \"4d4402677a52fafd36fd51c1c0ef04882ffa2a131858e4d8b272bc38b6a39935\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92bd360a565b57cca68f9c8734042984a677c3123d7d1694770724a4728f99dd\""
Jan 29 12:59:01.135175 containerd[1515]: time="2025-01-29T12:59:01.135102809Z" level=info msg="StartContainer for \"92bd360a565b57cca68f9c8734042984a677c3123d7d1694770724a4728f99dd\""
Jan 29 12:59:01.139693 containerd[1515]: time="2025-01-29T12:59:01.139536548Z" level=info msg="CreateContainer within sandbox \"f3c1c4ee7cab31291a19c73a4590deccde661780f48300c31a74602b1114bba3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26a75277b98bd300b08da9a515b1f9fdc290511b6f42f5db4ba15809f9a13095\""
Jan 29 12:59:01.141658 containerd[1515]: time="2025-01-29T12:59:01.141165744Z" level=info msg="StartContainer for \"26a75277b98bd300b08da9a515b1f9fdc290511b6f42f5db4ba15809f9a13095\""
Jan 29 12:59:01.184806 systemd[1]: Started cri-containerd-92bd360a565b57cca68f9c8734042984a677c3123d7d1694770724a4728f99dd.scope - libcontainer container 92bd360a565b57cca68f9c8734042984a677c3123d7d1694770724a4728f99dd.
Jan 29 12:59:01.196299 systemd[1]: Started cri-containerd-26a75277b98bd300b08da9a515b1f9fdc290511b6f42f5db4ba15809f9a13095.scope - libcontainer container 26a75277b98bd300b08da9a515b1f9fdc290511b6f42f5db4ba15809f9a13095.
Jan 29 12:59:01.252341 containerd[1515]: time="2025-01-29T12:59:01.252201100Z" level=info msg="StartContainer for \"92bd360a565b57cca68f9c8734042984a677c3123d7d1694770724a4728f99dd\" returns successfully"
Jan 29 12:59:01.262902 containerd[1515]: time="2025-01-29T12:59:01.261666354Z" level=info msg="StartContainer for \"26a75277b98bd300b08da9a515b1f9fdc290511b6f42f5db4ba15809f9a13095\" returns successfully"
Jan 29 12:59:01.760535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075915575.mount: Deactivated successfully.
Jan 29 12:59:02.008691 kubelet[2726]: I0129 12:59:02.007155 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ld4tf" podStartSLOduration=31.007092746 podStartE2EDuration="31.007092746s" podCreationTimestamp="2025-01-29 12:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:59:02.005496042 +0000 UTC m=+37.494112682" watchObservedRunningTime="2025-01-29 12:59:02.007092746 +0000 UTC m=+37.495709397" Jan 29 12:59:02.062676 kubelet[2726]: I0129 12:59:02.061947 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-khd2n" podStartSLOduration=31.061918771 podStartE2EDuration="31.061918771s" podCreationTimestamp="2025-01-29 12:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:59:02.058826743 +0000 UTC m=+37.547443398" watchObservedRunningTime="2025-01-29 12:59:02.061918771 +0000 UTC m=+37.550535421" Jan 29 12:59:51.739334 systemd[1]: Started sshd@9-10.243.84.18:22-185.42.12.240:26108.service - OpenSSH per-connection server daemon (185.42.12.240:26108). Jan 29 12:59:52.070833 sshd[4114]: Connection reset by authenticating user root 185.42.12.240 port 26108 [preauth] Jan 29 12:59:52.074628 systemd[1]: sshd@9-10.243.84.18:22-185.42.12.240:26108.service: Deactivated successfully. Jan 29 12:59:52.149530 systemd[1]: Started sshd@10-10.243.84.18:22-185.42.12.240:26134.service - OpenSSH per-connection server daemon (185.42.12.240:26134). Jan 29 12:59:52.409857 sshd[4119]: Invalid user telecomadmin from 185.42.12.240 port 26134 Jan 29 12:59:52.465225 sshd[4119]: Connection reset by invalid user telecomadmin 185.42.12.240 port 26134 [preauth] Jan 29 12:59:52.467796 systemd[1]: sshd@10-10.243.84.18:22-185.42.12.240:26134.service: Deactivated successfully. 
Jan 29 12:59:52.540294 systemd[1]: Started sshd@11-10.243.84.18:22-185.42.12.240:26148.service - OpenSSH per-connection server daemon (185.42.12.240:26148). Jan 29 12:59:52.798830 sshd[4124]: Invalid user admin from 185.42.12.240 port 26148 Jan 29 12:59:52.854201 sshd[4124]: Connection reset by invalid user admin 185.42.12.240 port 26148 [preauth] Jan 29 12:59:52.856577 systemd[1]: sshd@11-10.243.84.18:22-185.42.12.240:26148.service: Deactivated successfully. Jan 29 12:59:52.923618 systemd[1]: Started sshd@12-10.243.84.18:22-185.42.12.240:26156.service - OpenSSH per-connection server daemon (185.42.12.240:26156). Jan 29 12:59:53.230907 sshd[4129]: Connection reset by authenticating user root 185.42.12.240 port 26156 [preauth] Jan 29 12:59:53.232276 systemd[1]: sshd@12-10.243.84.18:22-185.42.12.240:26156.service: Deactivated successfully. Jan 29 12:59:53.305804 systemd[1]: Started sshd@13-10.243.84.18:22-185.42.12.240:26158.service - OpenSSH per-connection server daemon (185.42.12.240:26158). Jan 29 12:59:53.671686 sshd[4134]: Connection reset by authenticating user root 185.42.12.240 port 26158 [preauth] Jan 29 12:59:53.673490 systemd[1]: sshd@13-10.243.84.18:22-185.42.12.240:26158.service: Deactivated successfully. Jan 29 13:00:25.160254 systemd[1]: Started sshd@14-10.243.84.18:22-147.75.109.163:46288.service - OpenSSH per-connection server daemon (147.75.109.163:46288). Jan 29 13:00:26.087950 sshd[4145]: Accepted publickey for core from 147.75.109.163 port 46288 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:26.091552 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:26.101124 systemd-logind[1492]: New session 12 of user core. Jan 29 13:00:26.109112 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 13:00:27.313763 sshd[4147]: Connection closed by 147.75.109.163 port 46288 Jan 29 13:00:27.316702 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Jan 29 13:00:27.326333 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit. Jan 29 13:00:27.327036 systemd[1]: sshd@14-10.243.84.18:22-147.75.109.163:46288.service: Deactivated successfully. Jan 29 13:00:27.331369 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 13:00:27.333264 systemd-logind[1492]: Removed session 12. Jan 29 13:00:32.474830 systemd[1]: Started sshd@15-10.243.84.18:22-147.75.109.163:49012.service - OpenSSH per-connection server daemon (147.75.109.163:49012). Jan 29 13:00:33.365374 sshd[4160]: Accepted publickey for core from 147.75.109.163 port 49012 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:33.367612 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:33.374073 systemd-logind[1492]: New session 13 of user core. Jan 29 13:00:33.379983 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 13:00:34.082052 sshd[4164]: Connection closed by 147.75.109.163 port 49012 Jan 29 13:00:34.083735 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Jan 29 13:00:34.089180 systemd[1]: sshd@15-10.243.84.18:22-147.75.109.163:49012.service: Deactivated successfully. Jan 29 13:00:34.091880 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 13:00:34.093377 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit. Jan 29 13:00:34.094917 systemd-logind[1492]: Removed session 13. Jan 29 13:00:39.246181 systemd[1]: Started sshd@16-10.243.84.18:22-147.75.109.163:51006.service - OpenSSH per-connection server daemon (147.75.109.163:51006). 
Jan 29 13:00:40.165139 sshd[4176]: Accepted publickey for core from 147.75.109.163 port 51006 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:40.167303 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:40.177395 systemd-logind[1492]: New session 14 of user core. Jan 29 13:00:40.184011 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 13:00:40.900942 sshd[4178]: Connection closed by 147.75.109.163 port 51006 Jan 29 13:00:40.902302 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Jan 29 13:00:40.910720 systemd[1]: sshd@16-10.243.84.18:22-147.75.109.163:51006.service: Deactivated successfully. Jan 29 13:00:40.915105 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 13:00:40.916535 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit. Jan 29 13:00:40.919283 systemd-logind[1492]: Removed session 14. Jan 29 13:00:41.062158 systemd[1]: Started sshd@17-10.243.84.18:22-147.75.109.163:51022.service - OpenSSH per-connection server daemon (147.75.109.163:51022). Jan 29 13:00:41.944603 sshd[4190]: Accepted publickey for core from 147.75.109.163 port 51022 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:41.946843 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:41.953344 systemd-logind[1492]: New session 15 of user core. Jan 29 13:00:41.961974 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 13:00:42.755872 sshd[4192]: Connection closed by 147.75.109.163 port 51022 Jan 29 13:00:42.755323 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Jan 29 13:00:42.762960 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. Jan 29 13:00:42.764272 systemd[1]: sshd@17-10.243.84.18:22-147.75.109.163:51022.service: Deactivated successfully. 
Jan 29 13:00:42.767776 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 13:00:42.771092 systemd-logind[1492]: Removed session 15. Jan 29 13:00:42.919207 systemd[1]: Started sshd@18-10.243.84.18:22-147.75.109.163:51038.service - OpenSSH per-connection server daemon (147.75.109.163:51038). Jan 29 13:00:43.823659 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 51038 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:43.825841 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:43.833158 systemd-logind[1492]: New session 16 of user core. Jan 29 13:00:43.840991 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 13:00:44.534074 sshd[4203]: Connection closed by 147.75.109.163 port 51038 Jan 29 13:00:44.535093 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jan 29 13:00:44.539381 systemd[1]: sshd@18-10.243.84.18:22-147.75.109.163:51038.service: Deactivated successfully. Jan 29 13:00:44.542031 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 13:00:44.544125 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. Jan 29 13:00:44.546069 systemd-logind[1492]: Removed session 16. Jan 29 13:00:49.691098 systemd[1]: Started sshd@19-10.243.84.18:22-147.75.109.163:48884.service - OpenSSH per-connection server daemon (147.75.109.163:48884). Jan 29 13:00:50.585127 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 48884 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:50.587267 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:50.594662 systemd-logind[1492]: New session 17 of user core. Jan 29 13:00:50.598007 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 13:00:51.286114 sshd[4217]: Connection closed by 147.75.109.163 port 48884 Jan 29 13:00:51.286678 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jan 29 13:00:51.292417 systemd[1]: sshd@19-10.243.84.18:22-147.75.109.163:48884.service: Deactivated successfully. Jan 29 13:00:51.294953 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 13:00:51.296068 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. Jan 29 13:00:51.297494 systemd-logind[1492]: Removed session 17. Jan 29 13:00:56.447157 systemd[1]: Started sshd@20-10.243.84.18:22-147.75.109.163:48898.service - OpenSSH per-connection server daemon (147.75.109.163:48898). Jan 29 13:00:57.354700 sshd[4229]: Accepted publickey for core from 147.75.109.163 port 48898 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:57.356937 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:57.366214 systemd-logind[1492]: New session 18 of user core. Jan 29 13:00:57.369179 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 13:00:58.082342 sshd[4231]: Connection closed by 147.75.109.163 port 48898 Jan 29 13:00:58.084250 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Jan 29 13:00:58.089475 systemd[1]: sshd@20-10.243.84.18:22-147.75.109.163:48898.service: Deactivated successfully. Jan 29 13:00:58.092405 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 13:00:58.093452 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. Jan 29 13:00:58.095418 systemd-logind[1492]: Removed session 18. Jan 29 13:00:58.238149 systemd[1]: Started sshd@21-10.243.84.18:22-147.75.109.163:37682.service - OpenSSH per-connection server daemon (147.75.109.163:37682). 
Jan 29 13:00:59.140446 sshd[4242]: Accepted publickey for core from 147.75.109.163 port 37682 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:00:59.143211 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:00:59.149590 systemd-logind[1492]: New session 19 of user core. Jan 29 13:00:59.161167 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 13:01:00.214857 sshd[4244]: Connection closed by 147.75.109.163 port 37682 Jan 29 13:01:00.216431 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Jan 29 13:01:00.221323 systemd[1]: sshd@21-10.243.84.18:22-147.75.109.163:37682.service: Deactivated successfully. Jan 29 13:01:00.224051 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 13:01:00.225773 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. Jan 29 13:01:00.227643 systemd-logind[1492]: Removed session 19. Jan 29 13:01:00.380226 systemd[1]: Started sshd@22-10.243.84.18:22-147.75.109.163:37696.service - OpenSSH per-connection server daemon (147.75.109.163:37696). Jan 29 13:01:01.283293 sshd[4253]: Accepted publickey for core from 147.75.109.163 port 37696 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:01:01.284127 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:01:01.293290 systemd-logind[1492]: New session 20 of user core. Jan 29 13:01:01.301018 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 13:01:04.161334 sshd[4255]: Connection closed by 147.75.109.163 port 37696 Jan 29 13:01:04.164078 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Jan 29 13:01:04.182392 systemd[1]: sshd@22-10.243.84.18:22-147.75.109.163:37696.service: Deactivated successfully. Jan 29 13:01:04.185933 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 29 13:01:04.187190 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit. Jan 29 13:01:04.189044 systemd-logind[1492]: Removed session 20. Jan 29 13:01:04.325205 systemd[1]: Started sshd@23-10.243.84.18:22-147.75.109.163:37706.service - OpenSSH per-connection server daemon (147.75.109.163:37706). Jan 29 13:01:05.227337 sshd[4274]: Accepted publickey for core from 147.75.109.163 port 37706 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:01:05.229352 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:01:05.236106 systemd-logind[1492]: New session 21 of user core. Jan 29 13:01:05.243024 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 13:01:06.249049 sshd[4276]: Connection closed by 147.75.109.163 port 37706 Jan 29 13:01:06.250334 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Jan 29 13:01:06.255692 systemd[1]: sshd@23-10.243.84.18:22-147.75.109.163:37706.service: Deactivated successfully. Jan 29 13:01:06.258210 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 13:01:06.259530 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit. Jan 29 13:01:06.260988 systemd-logind[1492]: Removed session 21. Jan 29 13:01:06.406310 systemd[1]: Started sshd@24-10.243.84.18:22-147.75.109.163:37708.service - OpenSSH per-connection server daemon (147.75.109.163:37708). Jan 29 13:01:07.309068 sshd[4285]: Accepted publickey for core from 147.75.109.163 port 37708 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:01:07.310946 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:01:07.317138 systemd-logind[1492]: New session 22 of user core. Jan 29 13:01:07.322998 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 29 13:01:08.015837 sshd[4287]: Connection closed by 147.75.109.163 port 37708 Jan 29 13:01:08.016864 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Jan 29 13:01:08.022583 systemd[1]: sshd@24-10.243.84.18:22-147.75.109.163:37708.service: Deactivated successfully. Jan 29 13:01:08.027246 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 13:01:08.029764 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit. Jan 29 13:01:08.031645 systemd-logind[1492]: Removed session 22. Jan 29 13:01:13.180175 systemd[1]: Started sshd@25-10.243.84.18:22-147.75.109.163:32872.service - OpenSSH per-connection server daemon (147.75.109.163:32872). Jan 29 13:01:14.082746 sshd[4302]: Accepted publickey for core from 147.75.109.163 port 32872 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:01:14.084645 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:01:14.091017 systemd-logind[1492]: New session 23 of user core. Jan 29 13:01:14.101151 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 13:01:14.791542 sshd[4304]: Connection closed by 147.75.109.163 port 32872 Jan 29 13:01:14.792927 sshd-session[4302]: pam_unix(sshd:session): session closed for user core Jan 29 13:01:14.798543 systemd[1]: sshd@25-10.243.84.18:22-147.75.109.163:32872.service: Deactivated successfully. Jan 29 13:01:14.802153 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 13:01:14.803725 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit. Jan 29 13:01:14.805726 systemd-logind[1492]: Removed session 23. Jan 29 13:01:19.953176 systemd[1]: Started sshd@26-10.243.84.18:22-147.75.109.163:58602.service - OpenSSH per-connection server daemon (147.75.109.163:58602). 
Jan 29 13:01:20.846830 sshd[4316]: Accepted publickey for core from 147.75.109.163 port 58602 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:01:20.849194 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:01:20.857074 systemd-logind[1492]: New session 24 of user core. Jan 29 13:01:20.864981 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 13:01:21.552613 sshd[4318]: Connection closed by 147.75.109.163 port 58602 Jan 29 13:01:21.553249 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Jan 29 13:01:21.558795 systemd[1]: sshd@26-10.243.84.18:22-147.75.109.163:58602.service: Deactivated successfully. Jan 29 13:01:21.561224 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 13:01:21.562279 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit. Jan 29 13:01:21.564098 systemd-logind[1492]: Removed session 24. Jan 29 13:01:21.713140 systemd[1]: Started sshd@27-10.243.84.18:22-147.75.109.163:58614.service - OpenSSH per-connection server daemon (147.75.109.163:58614). Jan 29 13:01:22.601020 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 58614 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:01:22.602980 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:01:22.609020 systemd-logind[1492]: New session 25 of user core. Jan 29 13:01:22.620072 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 29 13:01:24.512121 containerd[1515]: time="2025-01-29T13:01:24.511399850Z" level=info msg="StopContainer for \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\" with timeout 30 (s)" Jan 29 13:01:24.519667 containerd[1515]: time="2025-01-29T13:01:24.518997375Z" level=info msg="Stop container \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\" with signal terminated" Jan 29 13:01:24.580932 systemd[1]: cri-containerd-e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586.scope: Deactivated successfully. Jan 29 13:01:24.620841 containerd[1515]: time="2025-01-29T13:01:24.620651572Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 13:01:24.645136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586-rootfs.mount: Deactivated successfully. 
Jan 29 13:01:24.645896 containerd[1515]: time="2025-01-29T13:01:24.645734599Z" level=info msg="StopContainer for \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\" with timeout 2 (s)" Jan 29 13:01:24.646704 containerd[1515]: time="2025-01-29T13:01:24.646675858Z" level=info msg="Stop container \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\" with signal terminated" Jan 29 13:01:24.652088 containerd[1515]: time="2025-01-29T13:01:24.651940803Z" level=info msg="shim disconnected" id=e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586 namespace=k8s.io Jan 29 13:01:24.652580 containerd[1515]: time="2025-01-29T13:01:24.652424162Z" level=warning msg="cleaning up after shim disconnected" id=e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586 namespace=k8s.io Jan 29 13:01:24.652580 containerd[1515]: time="2025-01-29T13:01:24.652459521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:01:24.664455 systemd-networkd[1437]: lxc_health: Link DOWN Jan 29 13:01:24.664467 systemd-networkd[1437]: lxc_health: Lost carrier Jan 29 13:01:24.690531 systemd[1]: cri-containerd-50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d.scope: Deactivated successfully. Jan 29 13:01:24.691028 systemd[1]: cri-containerd-50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d.scope: Consumed 10.593s CPU time. 
Jan 29 13:01:24.717041 containerd[1515]: time="2025-01-29T13:01:24.716747713Z" level=info msg="StopContainer for \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\" returns successfully" Jan 29 13:01:24.718480 containerd[1515]: time="2025-01-29T13:01:24.718433184Z" level=info msg="StopPodSandbox for \"8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd\"" Jan 29 13:01:24.720947 containerd[1515]: time="2025-01-29T13:01:24.720706159Z" level=info msg="Container to stop \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 13:01:24.726857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd-shm.mount: Deactivated successfully. Jan 29 13:01:24.744331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d-rootfs.mount: Deactivated successfully. Jan 29 13:01:24.750621 systemd[1]: cri-containerd-8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd.scope: Deactivated successfully. 
Jan 29 13:01:24.754511 containerd[1515]: time="2025-01-29T13:01:24.754374100Z" level=info msg="shim disconnected" id=50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d namespace=k8s.io Jan 29 13:01:24.754769 containerd[1515]: time="2025-01-29T13:01:24.754606382Z" level=warning msg="cleaning up after shim disconnected" id=50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d namespace=k8s.io Jan 29 13:01:24.754769 containerd[1515]: time="2025-01-29T13:01:24.754638075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:01:24.797934 containerd[1515]: time="2025-01-29T13:01:24.797699387Z" level=info msg="StopContainer for \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\" returns successfully" Jan 29 13:01:24.800017 containerd[1515]: time="2025-01-29T13:01:24.799457430Z" level=info msg="StopPodSandbox for \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\"" Jan 29 13:01:24.800476 containerd[1515]: time="2025-01-29T13:01:24.800100967Z" level=info msg="Container to stop \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 13:01:24.800476 containerd[1515]: time="2025-01-29T13:01:24.800374340Z" level=info msg="Container to stop \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 13:01:24.800476 containerd[1515]: time="2025-01-29T13:01:24.800418449Z" level=info msg="Container to stop \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 13:01:24.800476 containerd[1515]: time="2025-01-29T13:01:24.800436820Z" level=info msg="Container to stop \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 13:01:24.800476 
containerd[1515]: time="2025-01-29T13:01:24.800452937Z" level=info msg="Container to stop \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 13:01:24.801064 containerd[1515]: time="2025-01-29T13:01:24.800900367Z" level=info msg="shim disconnected" id=8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd namespace=k8s.io Jan 29 13:01:24.801064 containerd[1515]: time="2025-01-29T13:01:24.800952792Z" level=warning msg="cleaning up after shim disconnected" id=8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd namespace=k8s.io Jan 29 13:01:24.801064 containerd[1515]: time="2025-01-29T13:01:24.800978590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:01:24.814241 systemd[1]: cri-containerd-4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee.scope: Deactivated successfully. Jan 29 13:01:24.830971 containerd[1515]: time="2025-01-29T13:01:24.830904497Z" level=info msg="TearDown network for sandbox \"8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd\" successfully" Jan 29 13:01:24.830971 containerd[1515]: time="2025-01-29T13:01:24.830965974Z" level=info msg="StopPodSandbox for \"8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd\" returns successfully" Jan 29 13:01:24.863964 containerd[1515]: time="2025-01-29T13:01:24.863558072Z" level=info msg="shim disconnected" id=4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee namespace=k8s.io Jan 29 13:01:24.863964 containerd[1515]: time="2025-01-29T13:01:24.863635841Z" level=warning msg="cleaning up after shim disconnected" id=4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee namespace=k8s.io Jan 29 13:01:24.863964 containerd[1515]: time="2025-01-29T13:01:24.863651087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:01:24.887560 kubelet[2726]: E0129 13:01:24.887279 2726 kubelet.go:2901] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 13:01:24.892614 containerd[1515]: time="2025-01-29T13:01:24.892295797Z" level=warning msg="cleanup warnings time=\"2025-01-29T13:01:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 13:01:24.894194 containerd[1515]: time="2025-01-29T13:01:24.894063792Z" level=info msg="TearDown network for sandbox \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" successfully" Jan 29 13:01:24.894194 containerd[1515]: time="2025-01-29T13:01:24.894102967Z" level=info msg="StopPodSandbox for \"4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee\" returns successfully" Jan 29 13:01:25.006337 kubelet[2726]: I0129 13:01:25.005992 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cni-path\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.006337 kubelet[2726]: I0129 13:01:25.006073 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-xtables-lock\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.006337 kubelet[2726]: I0129 13:01:25.006102 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-cgroup\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.006337 kubelet[2726]: I0129 13:01:25.006144 2726 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-config-path\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.006337 kubelet[2726]: I0129 13:01:25.006179 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64tbr\" (UniqueName: \"kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-kube-api-access-64tbr\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.006337 kubelet[2726]: I0129 13:01:25.006235 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-bpf-maps\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007540 kubelet[2726]: I0129 13:01:25.006282 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf516ff4-fc12-42c7-96c9-710ba06ef722-clustermesh-secrets\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007540 kubelet[2726]: I0129 13:01:25.006318 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2l8v\" (UniqueName: \"kubernetes.io/projected/a3b44990-636b-4493-8905-3f93af6a411c-kube-api-access-b2l8v\") pod \"a3b44990-636b-4493-8905-3f93af6a411c\" (UID: \"a3b44990-636b-4493-8905-3f93af6a411c\") " Jan 29 13:01:25.007540 kubelet[2726]: I0129 13:01:25.006345 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-net\") pod 
\"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007540 kubelet[2726]: I0129 13:01:25.006368 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-hubble-tls\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007540 kubelet[2726]: I0129 13:01:25.006397 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-etc-cni-netd\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007540 kubelet[2726]: I0129 13:01:25.006423 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-lib-modules\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007862 kubelet[2726]: I0129 13:01:25.006444 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-run\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007862 kubelet[2726]: I0129 13:01:25.006474 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-kernel\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007862 kubelet[2726]: I0129 13:01:25.006507 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-hostproc\") pod \"cf516ff4-fc12-42c7-96c9-710ba06ef722\" (UID: \"cf516ff4-fc12-42c7-96c9-710ba06ef722\") " Jan 29 13:01:25.007862 kubelet[2726]: I0129 13:01:25.006554 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3b44990-636b-4493-8905-3f93af6a411c-cilium-config-path\") pod \"a3b44990-636b-4493-8905-3f93af6a411c\" (UID: \"a3b44990-636b-4493-8905-3f93af6a411c\") " Jan 29 13:01:25.015646 kubelet[2726]: I0129 13:01:25.014897 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3b44990-636b-4493-8905-3f93af6a411c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3b44990-636b-4493-8905-3f93af6a411c" (UID: "a3b44990-636b-4493-8905-3f93af6a411c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 13:01:25.016646 kubelet[2726]: I0129 13:01:25.014275 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cni-path" (OuterVolumeSpecName: "cni-path") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.016646 kubelet[2726]: I0129 13:01:25.016015 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021531 kubelet[2726]: I0129 13:01:25.020893 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b44990-636b-4493-8905-3f93af6a411c-kube-api-access-b2l8v" (OuterVolumeSpecName: "kube-api-access-b2l8v") pod "a3b44990-636b-4493-8905-3f93af6a411c" (UID: "a3b44990-636b-4493-8905-3f93af6a411c"). InnerVolumeSpecName "kube-api-access-b2l8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 13:01:25.021531 kubelet[2726]: I0129 13:01:25.020916 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 13:01:25.021531 kubelet[2726]: I0129 13:01:25.020972 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021531 kubelet[2726]: I0129 13:01:25.020990 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021531 kubelet[2726]: I0129 13:01:25.021019 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021903 kubelet[2726]: I0129 13:01:25.021032 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021903 kubelet[2726]: I0129 13:01:25.021052 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021903 kubelet[2726]: I0129 13:01:25.021062 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021903 kubelet[2726]: I0129 13:01:25.021099 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-hostproc" (OuterVolumeSpecName: "hostproc") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.021903 kubelet[2726]: I0129 13:01:25.021131 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 13:01:25.025264 kubelet[2726]: I0129 13:01:25.025225 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 13:01:25.025607 kubelet[2726]: I0129 13:01:25.025526 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-kube-api-access-64tbr" (OuterVolumeSpecName: "kube-api-access-64tbr") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "kube-api-access-64tbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 13:01:25.028567 kubelet[2726]: I0129 13:01:25.028500 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf516ff4-fc12-42c7-96c9-710ba06ef722-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cf516ff4-fc12-42c7-96c9-710ba06ef722" (UID: "cf516ff4-fc12-42c7-96c9-710ba06ef722"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109000 2726 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cni-path\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109073 2726 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-xtables-lock\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109100 2726 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-64tbr\" (UniqueName: \"kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-kube-api-access-64tbr\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109118 2726 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-bpf-maps\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109145 2726 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-cgroup\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109158 2726 
reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-config-path\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109178 2726 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b2l8v\" (UniqueName: \"kubernetes.io/projected/a3b44990-636b-4493-8905-3f93af6a411c-kube-api-access-b2l8v\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.109420 kubelet[2726]: I0129 13:01:25.109195 2726 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf516ff4-fc12-42c7-96c9-710ba06ef722-clustermesh-secrets\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 kubelet[2726]: I0129 13:01:25.109233 2726 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-net\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 kubelet[2726]: I0129 13:01:25.109248 2726 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf516ff4-fc12-42c7-96c9-710ba06ef722-hubble-tls\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 kubelet[2726]: I0129 13:01:25.109261 2726 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-etc-cni-netd\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 kubelet[2726]: I0129 13:01:25.109296 2726 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-lib-modules\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 
kubelet[2726]: I0129 13:01:25.109323 2726 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-cilium-run\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 kubelet[2726]: I0129 13:01:25.109337 2726 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-host-proc-sys-kernel\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 kubelet[2726]: I0129 13:01:25.109369 2726 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3b44990-636b-4493-8905-3f93af6a411c-cilium-config-path\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.110310 kubelet[2726]: I0129 13:01:25.109385 2726 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf516ff4-fc12-42c7-96c9-710ba06ef722-hostproc\") on node \"srv-i7wtu.gb1.brightbox.com\" DevicePath \"\"" Jan 29 13:01:25.356967 kubelet[2726]: I0129 13:01:25.355335 2726 scope.go:117] "RemoveContainer" containerID="e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586" Jan 29 13:01:25.363681 systemd[1]: Removed slice kubepods-besteffort-poda3b44990_636b_4493_8905_3f93af6a411c.slice - libcontainer container kubepods-besteffort-poda3b44990_636b_4493_8905_3f93af6a411c.slice. 
Jan 29 13:01:25.376652 containerd[1515]: time="2025-01-29T13:01:25.376612935Z" level=info msg="RemoveContainer for \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\"" Jan 29 13:01:25.386991 containerd[1515]: time="2025-01-29T13:01:25.386942105Z" level=info msg="RemoveContainer for \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\" returns successfully" Jan 29 13:01:25.391460 kubelet[2726]: I0129 13:01:25.391424 2726 scope.go:117] "RemoveContainer" containerID="e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586" Jan 29 13:01:25.392115 containerd[1515]: time="2025-01-29T13:01:25.391961989Z" level=error msg="ContainerStatus for \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\": not found" Jan 29 13:01:25.393623 systemd[1]: Removed slice kubepods-burstable-podcf516ff4_fc12_42c7_96c9_710ba06ef722.slice - libcontainer container kubepods-burstable-podcf516ff4_fc12_42c7_96c9_710ba06ef722.slice. Jan 29 13:01:25.393776 systemd[1]: kubepods-burstable-podcf516ff4_fc12_42c7_96c9_710ba06ef722.slice: Consumed 10.724s CPU time. 
Jan 29 13:01:25.401961 kubelet[2726]: E0129 13:01:25.401898 2726 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\": not found" containerID="e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586" Jan 29 13:01:25.402103 kubelet[2726]: I0129 13:01:25.401988 2726 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586"} err="failed to get container status \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3cf26e544691120b398c39117e8f161bf075ca181e967f1a518cfd70b9b3586\": not found" Jan 29 13:01:25.402211 kubelet[2726]: I0129 13:01:25.402107 2726 scope.go:117] "RemoveContainer" containerID="50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d" Jan 29 13:01:25.405328 containerd[1515]: time="2025-01-29T13:01:25.405282352Z" level=info msg="RemoveContainer for \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\"" Jan 29 13:01:25.410809 containerd[1515]: time="2025-01-29T13:01:25.410678820Z" level=info msg="RemoveContainer for \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\" returns successfully" Jan 29 13:01:25.411175 kubelet[2726]: I0129 13:01:25.411139 2726 scope.go:117] "RemoveContainer" containerID="102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06" Jan 29 13:01:25.416082 containerd[1515]: time="2025-01-29T13:01:25.415965042Z" level=info msg="RemoveContainer for \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\"" Jan 29 13:01:25.419598 containerd[1515]: time="2025-01-29T13:01:25.419471451Z" level=info msg="RemoveContainer for \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\" returns successfully" 
Jan 29 13:01:25.420004 kubelet[2726]: I0129 13:01:25.419862 2726 scope.go:117] "RemoveContainer" containerID="f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a" Jan 29 13:01:25.422117 containerd[1515]: time="2025-01-29T13:01:25.421623129Z" level=info msg="RemoveContainer for \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\"" Jan 29 13:01:25.426445 containerd[1515]: time="2025-01-29T13:01:25.426344547Z" level=info msg="RemoveContainer for \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\" returns successfully" Jan 29 13:01:25.427055 kubelet[2726]: I0129 13:01:25.426783 2726 scope.go:117] "RemoveContainer" containerID="a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f" Jan 29 13:01:25.429302 containerd[1515]: time="2025-01-29T13:01:25.429173156Z" level=info msg="RemoveContainer for \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\"" Jan 29 13:01:25.432273 containerd[1515]: time="2025-01-29T13:01:25.432201081Z" level=info msg="RemoveContainer for \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\" returns successfully" Jan 29 13:01:25.433920 kubelet[2726]: I0129 13:01:25.432431 2726 scope.go:117] "RemoveContainer" containerID="d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8" Jan 29 13:01:25.435811 containerd[1515]: time="2025-01-29T13:01:25.435156646Z" level=info msg="RemoveContainer for \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\"" Jan 29 13:01:25.439415 containerd[1515]: time="2025-01-29T13:01:25.439349649Z" level=info msg="RemoveContainer for \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\" returns successfully" Jan 29 13:01:25.440060 kubelet[2726]: I0129 13:01:25.439948 2726 scope.go:117] "RemoveContainer" containerID="50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d" Jan 29 13:01:25.441157 containerd[1515]: time="2025-01-29T13:01:25.440875170Z" level=error msg="ContainerStatus for 
\"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\": not found" Jan 29 13:01:25.441324 kubelet[2726]: E0129 13:01:25.441112 2726 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\": not found" containerID="50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d" Jan 29 13:01:25.441324 kubelet[2726]: I0129 13:01:25.441148 2726 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d"} err="failed to get container status \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"50c90e5ccb930483a61f88c9f118d565182bef951184d801aeb3699e24e9ee9d\": not found" Jan 29 13:01:25.441324 kubelet[2726]: I0129 13:01:25.441175 2726 scope.go:117] "RemoveContainer" containerID="102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06" Jan 29 13:01:25.442237 kubelet[2726]: E0129 13:01:25.441989 2726 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\": not found" containerID="102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06" Jan 29 13:01:25.442237 kubelet[2726]: I0129 13:01:25.442017 2726 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06"} err="failed to get container status \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\": not found" Jan 29 13:01:25.442237 kubelet[2726]: I0129 13:01:25.442052 2726 scope.go:117] "RemoveContainer" containerID="f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a" Jan 29 13:01:25.443660 containerd[1515]: time="2025-01-29T13:01:25.441759338Z" level=error msg="ContainerStatus for \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"102f7b9eeb5d4edb026d307e440cbb52b750e408aa3da345f5120dc814721c06\": not found" Jan 29 13:01:25.443660 containerd[1515]: time="2025-01-29T13:01:25.442352586Z" level=error msg="ContainerStatus for \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\": not found" Jan 29 13:01:25.443869 kubelet[2726]: E0129 13:01:25.442923 2726 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\": not found" containerID="f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a" Jan 29 13:01:25.443869 kubelet[2726]: I0129 13:01:25.442948 2726 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a"} err="failed to get container status \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f33a5f2971344d4c5a5b6c03b6c51ae98000d1ca7c0340fbe8b2c2c73bd22f3a\": not found" Jan 29 13:01:25.443869 kubelet[2726]: I0129 13:01:25.443329 2726 scope.go:117] 
"RemoveContainer" containerID="a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f" Jan 29 13:01:25.444256 containerd[1515]: time="2025-01-29T13:01:25.443820763Z" level=error msg="ContainerStatus for \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\": not found" Jan 29 13:01:25.444346 kubelet[2726]: E0129 13:01:25.444175 2726 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\": not found" containerID="a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f" Jan 29 13:01:25.444346 kubelet[2726]: I0129 13:01:25.444204 2726 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f"} err="failed to get container status \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a51fd6c0acfe26ddc63d8e61bf5f5caa373557a51a6c578d19be56dfcf91485f\": not found" Jan 29 13:01:25.444346 kubelet[2726]: I0129 13:01:25.444247 2726 scope.go:117] "RemoveContainer" containerID="d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8" Jan 29 13:01:25.444532 containerd[1515]: time="2025-01-29T13:01:25.444482381Z" level=error msg="ContainerStatus for \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\": not found" Jan 29 13:01:25.444810 kubelet[2726]: E0129 13:01:25.444710 2726 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\": not found" containerID="d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8" Jan 29 13:01:25.444810 kubelet[2726]: I0129 13:01:25.444769 2726 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8"} err="failed to get container status \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d67298d1a19316aaa425518c231fae214ffb5c86021e004a38853a0b8a7713a8\": not found" Jan 29 13:01:25.570422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee-rootfs.mount: Deactivated successfully. Jan 29 13:01:25.570584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4517ff33a9839fe5c11f1a481d1c1eba5a1d5c373922b2ec6c73bc420ac34aee-shm.mount: Deactivated successfully. Jan 29 13:01:25.570701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c2f888900090e57cc1d97ef70712cb184f1579207b006f946e5375e908c49cd-rootfs.mount: Deactivated successfully. Jan 29 13:01:25.570819 systemd[1]: var-lib-kubelet-pods-cf516ff4\x2dfc12\x2d42c7\x2d96c9\x2d710ba06ef722-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d64tbr.mount: Deactivated successfully. Jan 29 13:01:25.570920 systemd[1]: var-lib-kubelet-pods-a3b44990\x2d636b\x2d4493\x2d8905\x2d3f93af6a411c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db2l8v.mount: Deactivated successfully. Jan 29 13:01:25.571028 systemd[1]: var-lib-kubelet-pods-cf516ff4\x2dfc12\x2d42c7\x2d96c9\x2d710ba06ef722-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 29 13:01:25.571121 systemd[1]: var-lib-kubelet-pods-cf516ff4\x2dfc12\x2d42c7\x2d96c9\x2d710ba06ef722-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 13:01:26.530914 sshd[4331]: Connection closed by 147.75.109.163 port 58614 Jan 29 13:01:26.532993 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Jan 29 13:01:26.537968 systemd[1]: sshd@27-10.243.84.18:22-147.75.109.163:58614.service: Deactivated successfully. Jan 29 13:01:26.541619 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 13:01:26.544150 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit. Jan 29 13:01:26.546229 systemd-logind[1492]: Removed session 25. Jan 29 13:01:26.689172 systemd[1]: Started sshd@28-10.243.84.18:22-147.75.109.163:58626.service - OpenSSH per-connection server daemon (147.75.109.163:58626). Jan 29 13:01:26.717517 kubelet[2726]: I0129 13:01:26.717435 2726 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b44990-636b-4493-8905-3f93af6a411c" path="/var/lib/kubelet/pods/a3b44990-636b-4493-8905-3f93af6a411c/volumes" Jan 29 13:01:26.720008 kubelet[2726]: I0129 13:01:26.719886 2726 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf516ff4-fc12-42c7-96c9-710ba06ef722" path="/var/lib/kubelet/pods/cf516ff4-fc12-42c7-96c9-710ba06ef722/volumes" Jan 29 13:01:27.592006 sshd[4490]: Accepted publickey for core from 147.75.109.163 port 58626 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:01:27.594194 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:01:27.601698 systemd-logind[1492]: New session 26 of user core. Jan 29 13:01:27.610095 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 29 13:01:28.583123 kubelet[2726]: I0129 13:01:28.582920 2726 setters.go:600] "Node became not ready" node="srv-i7wtu.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T13:01:28Z","lastTransitionTime":"2025-01-29T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 13:01:28.622936 kubelet[2726]: E0129 13:01:28.621306 2726 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3b44990-636b-4493-8905-3f93af6a411c" containerName="cilium-operator" Jan 29 13:01:28.622936 kubelet[2726]: E0129 13:01:28.622946 2726 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf516ff4-fc12-42c7-96c9-710ba06ef722" containerName="cilium-agent" Jan 29 13:01:28.623212 kubelet[2726]: E0129 13:01:28.622971 2726 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf516ff4-fc12-42c7-96c9-710ba06ef722" containerName="mount-cgroup" Jan 29 13:01:28.623212 kubelet[2726]: E0129 13:01:28.622995 2726 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf516ff4-fc12-42c7-96c9-710ba06ef722" containerName="apply-sysctl-overwrites" Jan 29 13:01:28.623212 kubelet[2726]: E0129 13:01:28.623006 2726 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf516ff4-fc12-42c7-96c9-710ba06ef722" containerName="mount-bpf-fs" Jan 29 13:01:28.623212 kubelet[2726]: E0129 13:01:28.623015 2726 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf516ff4-fc12-42c7-96c9-710ba06ef722" containerName="clean-cilium-state" Jan 29 13:01:28.647354 kubelet[2726]: I0129 13:01:28.635603 2726 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b44990-636b-4493-8905-3f93af6a411c" containerName="cilium-operator" Jan 29 13:01:28.649475 kubelet[2726]: I0129 13:01:28.647678 2726 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cf516ff4-fc12-42c7-96c9-710ba06ef722" containerName="cilium-agent"
Jan 29 13:01:28.681445 systemd[1]: Created slice kubepods-burstable-pod9e4b2f00_1bb3_408c_ac22_5b148a9f677e.slice - libcontainer container kubepods-burstable-pod9e4b2f00_1bb3_408c_ac22_5b148a9f677e.slice.
Jan 29 13:01:28.726356 sshd[4492]: Connection closed by 147.75.109.163 port 58626
Jan 29 13:01:28.727214 sshd-session[4490]: pam_unix(sshd:session): session closed for user core
Jan 29 13:01:28.732581 systemd[1]: sshd@28-10.243.84.18:22-147.75.109.163:58626.service: Deactivated successfully.
Jan 29 13:01:28.735654 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 13:01:28.739830 kubelet[2726]: I0129 13:01:28.739172 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-lib-modules\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.739830 kubelet[2726]: I0129 13:01:28.739314 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9sc\" (UniqueName: \"kubernetes.io/projected/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-kube-api-access-dg9sc\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.739830 kubelet[2726]: I0129 13:01:28.739368 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-cni-path\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.739830 kubelet[2726]: I0129 13:01:28.739409 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-etc-cni-netd\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.739830 kubelet[2726]: I0129 13:01:28.739437 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-host-proc-sys-net\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.739830 kubelet[2726]: I0129 13:01:28.739469 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-hubble-tls\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740494 kubelet[2726]: I0129 13:01:28.739502 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-cilium-ipsec-secrets\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740494 kubelet[2726]: I0129 13:01:28.739534 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-host-proc-sys-kernel\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740494 kubelet[2726]: I0129 13:01:28.739573 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-hostproc\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740494 kubelet[2726]: I0129 13:01:28.739604 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-bpf-maps\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740494 kubelet[2726]: I0129 13:01:28.739635 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-xtables-lock\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740741 kubelet[2726]: I0129 13:01:28.740579 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-cilium-cgroup\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740741 kubelet[2726]: I0129 13:01:28.740638 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-clustermesh-secrets\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740741 kubelet[2726]: I0129 13:01:28.740685 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-cilium-run\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.740741 kubelet[2726]: I0129 13:01:28.740713 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e4b2f00-1bb3-408c-ac22-5b148a9f677e-cilium-config-path\") pod \"cilium-228ws\" (UID: \"9e4b2f00-1bb3-408c-ac22-5b148a9f677e\") " pod="kube-system/cilium-228ws"
Jan 29 13:01:28.741712 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit.
Jan 29 13:01:28.744817 systemd-logind[1492]: Removed session 26.
Jan 29 13:01:28.901139 systemd[1]: Started sshd@29-10.243.84.18:22-147.75.109.163:59464.service - OpenSSH per-connection server daemon (147.75.109.163:59464).
Jan 29 13:01:28.988043 containerd[1515]: time="2025-01-29T13:01:28.987972017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-228ws,Uid:9e4b2f00-1bb3-408c-ac22-5b148a9f677e,Namespace:kube-system,Attempt:0,}"
Jan 29 13:01:29.024961 containerd[1515]: time="2025-01-29T13:01:29.024626282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 13:01:29.024961 containerd[1515]: time="2025-01-29T13:01:29.024712972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 13:01:29.024961 containerd[1515]: time="2025-01-29T13:01:29.024730431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 13:01:29.025247 containerd[1515]: time="2025-01-29T13:01:29.024878448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 13:01:29.057972 systemd[1]: Started cri-containerd-dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e.scope - libcontainer container dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e.
Jan 29 13:01:29.099418 containerd[1515]: time="2025-01-29T13:01:29.099353877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-228ws,Uid:9e4b2f00-1bb3-408c-ac22-5b148a9f677e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\""
Jan 29 13:01:29.104822 containerd[1515]: time="2025-01-29T13:01:29.104761202Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 13:01:29.120007 containerd[1515]: time="2025-01-29T13:01:29.119942820Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018\""
Jan 29 13:01:29.122833 containerd[1515]: time="2025-01-29T13:01:29.121398146Z" level=info msg="StartContainer for \"1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018\""
Jan 29 13:01:29.177213 systemd[1]: Started cri-containerd-1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018.scope - libcontainer container 1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018.
Jan 29 13:01:29.224392 containerd[1515]: time="2025-01-29T13:01:29.224194379Z" level=info msg="StartContainer for \"1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018\" returns successfully"
Jan 29 13:01:29.241087 systemd[1]: cri-containerd-1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018.scope: Deactivated successfully.
Jan 29 13:01:29.288087 containerd[1515]: time="2025-01-29T13:01:29.287667277Z" level=info msg="shim disconnected" id=1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018 namespace=k8s.io
Jan 29 13:01:29.289067 containerd[1515]: time="2025-01-29T13:01:29.288197466Z" level=warning msg="cleaning up after shim disconnected" id=1fae225bef4e135e2fcf0a3b954343ad610b94851e9411637be8bed7175cf018 namespace=k8s.io
Jan 29 13:01:29.289067 containerd[1515]: time="2025-01-29T13:01:29.288359074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 13:01:29.401801 containerd[1515]: time="2025-01-29T13:01:29.401512486Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 13:01:29.416571 containerd[1515]: time="2025-01-29T13:01:29.416508892Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41\""
Jan 29 13:01:29.417679 containerd[1515]: time="2025-01-29T13:01:29.417631695Z" level=info msg="StartContainer for \"426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41\""
Jan 29 13:01:29.453989 systemd[1]: Started cri-containerd-426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41.scope - libcontainer container 426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41.
Jan 29 13:01:29.491458 containerd[1515]: time="2025-01-29T13:01:29.491321214Z" level=info msg="StartContainer for \"426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41\" returns successfully"
Jan 29 13:01:29.502928 systemd[1]: cri-containerd-426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41.scope: Deactivated successfully.
Jan 29 13:01:29.532116 containerd[1515]: time="2025-01-29T13:01:29.531982621Z" level=info msg="shim disconnected" id=426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41 namespace=k8s.io
Jan 29 13:01:29.532116 containerd[1515]: time="2025-01-29T13:01:29.532059264Z" level=warning msg="cleaning up after shim disconnected" id=426875b481319daa176485b692e39020eea6cd83bf4480e6147dae3d559ace41 namespace=k8s.io
Jan 29 13:01:29.532116 containerd[1515]: time="2025-01-29T13:01:29.532075283Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 13:01:29.714044 kubelet[2726]: E0129 13:01:29.713683 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-khd2n" podUID="450ba1e5-6526-43de-b462-dcc818053fc4"
Jan 29 13:01:29.808572 sshd[4506]: Accepted publickey for core from 147.75.109.163 port 59464 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 13:01:29.810981 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:01:29.818117 systemd-logind[1492]: New session 27 of user core.
Jan 29 13:01:29.825007 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 13:01:29.889968 kubelet[2726]: E0129 13:01:29.889892 2726 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 13:01:30.408837 containerd[1515]: time="2025-01-29T13:01:30.408148090Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 13:01:30.434876 sshd[4675]: Connection closed by 147.75.109.163 port 59464
Jan 29 13:01:30.435804 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
Jan 29 13:01:30.443719 containerd[1515]: time="2025-01-29T13:01:30.443518666Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236\""
Jan 29 13:01:30.445470 containerd[1515]: time="2025-01-29T13:01:30.445376097Z" level=info msg="StartContainer for \"39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236\""
Jan 29 13:01:30.448477 systemd[1]: sshd@29-10.243.84.18:22-147.75.109.163:59464.service: Deactivated successfully.
Jan 29 13:01:30.453285 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 13:01:30.457309 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit.
Jan 29 13:01:30.460664 systemd-logind[1492]: Removed session 27.
Jan 29 13:01:30.515281 systemd[1]: Started cri-containerd-39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236.scope - libcontainer container 39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236.
Jan 29 13:01:30.569818 containerd[1515]: time="2025-01-29T13:01:30.569728018Z" level=info msg="StartContainer for \"39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236\" returns successfully"
Jan 29 13:01:30.587760 systemd[1]: cri-containerd-39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236.scope: Deactivated successfully.
Jan 29 13:01:30.595098 systemd[1]: Started sshd@30-10.243.84.18:22-147.75.109.163:59472.service - OpenSSH per-connection server daemon (147.75.109.163:59472).
Jan 29 13:01:30.630699 containerd[1515]: time="2025-01-29T13:01:30.630382058Z" level=info msg="shim disconnected" id=39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236 namespace=k8s.io
Jan 29 13:01:30.630699 containerd[1515]: time="2025-01-29T13:01:30.630468772Z" level=warning msg="cleaning up after shim disconnected" id=39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236 namespace=k8s.io
Jan 29 13:01:30.630699 containerd[1515]: time="2025-01-29T13:01:30.630483044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 13:01:30.853153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39509358dc9ad63485bf32ec41110d2349647f53f829aa18a848a8f3a3a5a236-rootfs.mount: Deactivated successfully.
Jan 29 13:01:31.413594 containerd[1515]: time="2025-01-29T13:01:31.413526977Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 13:01:31.435114 containerd[1515]: time="2025-01-29T13:01:31.434674368Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd\""
Jan 29 13:01:31.436507 containerd[1515]: time="2025-01-29T13:01:31.435355042Z" level=info msg="StartContainer for \"697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd\""
Jan 29 13:01:31.489010 systemd[1]: Started cri-containerd-697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd.scope - libcontainer container 697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd.
Jan 29 13:01:31.522282 sshd[4719]: Accepted publickey for core from 147.75.109.163 port 59472 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 13:01:31.525044 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:01:31.527248 systemd[1]: cri-containerd-697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd.scope: Deactivated successfully.
Jan 29 13:01:31.534952 containerd[1515]: time="2025-01-29T13:01:31.533877476Z" level=info msg="StartContainer for \"697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd\" returns successfully"
Jan 29 13:01:31.537906 systemd-logind[1492]: New session 28 of user core.
Jan 29 13:01:31.542102 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 13:01:31.574238 containerd[1515]: time="2025-01-29T13:01:31.574033550Z" level=info msg="shim disconnected" id=697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd namespace=k8s.io
Jan 29 13:01:31.574238 containerd[1515]: time="2025-01-29T13:01:31.574213572Z" level=warning msg="cleaning up after shim disconnected" id=697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd namespace=k8s.io
Jan 29 13:01:31.574238 containerd[1515]: time="2025-01-29T13:01:31.574230316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 13:01:31.713849 kubelet[2726]: E0129 13:01:31.712920 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-khd2n" podUID="450ba1e5-6526-43de-b462-dcc818053fc4"
Jan 29 13:01:31.852922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-697375f5c41d7aff6373839b1c6561d49ee43e6b5a3fc82dc16f7d4eae42b8fd-rootfs.mount: Deactivated successfully.
Jan 29 13:01:32.419649 containerd[1515]: time="2025-01-29T13:01:32.419411613Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 13:01:32.443469 containerd[1515]: time="2025-01-29T13:01:32.441834851Z" level=info msg="CreateContainer within sandbox \"dd6f90e221a94c09667ccd9f736d3ef65c7bc9d7e7b328b22946658cf03be44e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0\""
Jan 29 13:01:32.458749 containerd[1515]: time="2025-01-29T13:01:32.458683855Z" level=info msg="StartContainer for \"36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0\""
Jan 29 13:01:32.507470 systemd[1]: run-containerd-runc-k8s.io-36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0-runc.yiisT2.mount: Deactivated successfully.
Jan 29 13:01:32.517031 systemd[1]: Started cri-containerd-36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0.scope - libcontainer container 36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0.
Jan 29 13:01:32.561560 containerd[1515]: time="2025-01-29T13:01:32.561299467Z" level=info msg="StartContainer for \"36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0\" returns successfully"
Jan 29 13:01:33.335856 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 13:01:33.450874 kubelet[2726]: I0129 13:01:33.450738 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-228ws" podStartSLOduration=5.450702537 podStartE2EDuration="5.450702537s" podCreationTimestamp="2025-01-29 13:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 13:01:33.449943139 +0000 UTC m=+188.938559804" watchObservedRunningTime="2025-01-29 13:01:33.450702537 +0000 UTC m=+188.939319198"
Jan 29 13:01:33.712548 kubelet[2726]: E0129 13:01:33.712289 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-khd2n" podUID="450ba1e5-6526-43de-b462-dcc818053fc4"
Jan 29 13:01:34.448072 systemd[1]: run-containerd-runc-k8s.io-36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0-runc.QiB1Rs.mount: Deactivated successfully.
Jan 29 13:01:36.676308 systemd[1]: run-containerd-runc-k8s.io-36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0-runc.ZLkcNz.mount: Deactivated successfully.
Jan 29 13:01:37.083121 systemd-networkd[1437]: lxc_health: Link UP
Jan 29 13:01:37.091291 systemd-networkd[1437]: lxc_health: Gained carrier
Jan 29 13:01:38.560292 systemd-networkd[1437]: lxc_health: Gained IPv6LL
Jan 29 13:01:41.333431 systemd[1]: run-containerd-runc-k8s.io-36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0-runc.aUMuD3.mount: Deactivated successfully.
Jan 29 13:01:43.506358 systemd[1]: run-containerd-runc-k8s.io-36e8445da62b33b8f4062af1336efd37ba7d9a12da60e7c7715cc7d65b4284f0-runc.hQOtZV.mount: Deactivated successfully.
Jan 29 13:01:43.719881 sshd[4776]: Connection closed by 147.75.109.163 port 59472
Jan 29 13:01:43.721677 sshd-session[4719]: pam_unix(sshd:session): session closed for user core
Jan 29 13:01:43.734009 systemd[1]: sshd@30-10.243.84.18:22-147.75.109.163:59472.service: Deactivated successfully.
Jan 29 13:01:43.736851 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 13:01:43.739477 systemd-logind[1492]: Session 28 logged out. Waiting for processes to exit.
Jan 29 13:01:43.741863 systemd-logind[1492]: Removed session 28.