Mar 19 12:01:31.046061 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Mar 19 10:13:43 -00 2025
Mar 19 12:01:31.046102 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc
Mar 19 12:01:31.046117 kernel: BIOS-provided physical RAM map:
Mar 19 12:01:31.046135 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 19 12:01:31.046146 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 19 12:01:31.046157 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 19 12:01:31.046169 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 19 12:01:31.046180 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 19 12:01:31.046191 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 19 12:01:31.046202 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 19 12:01:31.046213 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 19 12:01:31.046232 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 19 12:01:31.046250 kernel: NX (Execute Disable) protection: active
Mar 19 12:01:31.046262 kernel: APIC: Static calls initialized
Mar 19 12:01:31.046275 kernel: SMBIOS 2.8 present.
Mar 19 12:01:31.046305 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 19 12:01:31.046318 kernel: Hypervisor detected: KVM
Mar 19 12:01:31.046336 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 19 12:01:31.046348 kernel: kvm-clock: using sched offset of 5578859655 cycles
Mar 19 12:01:31.046361 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 19 12:01:31.046373 kernel: tsc: Detected 2499.998 MHz processor
Mar 19 12:01:31.046386 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 19 12:01:31.046398 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 19 12:01:31.046410 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 19 12:01:31.046423 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 19 12:01:31.046435 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 19 12:01:31.046451 kernel: Using GB pages for direct mapping
Mar 19 12:01:31.046463 kernel: ACPI: Early table checksum verification disabled
Mar 19 12:01:31.046475 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 19 12:01:31.046488 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 12:01:31.046500 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 12:01:31.046512 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 12:01:31.046524 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 19 12:01:31.046536 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 12:01:31.046548 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 12:01:31.046565 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 12:01:31.046577 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 12:01:31.046589 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 19 12:01:31.046602 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 19 12:01:31.046614 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 19 12:01:31.046632 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 19 12:01:31.046645 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 19 12:01:31.046668 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 19 12:01:31.046682 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 19 12:01:31.046695 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 19 12:01:31.046708 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 19 12:01:31.046720 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 19 12:01:31.046733 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 19 12:01:31.046745 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 19 12:01:31.046757 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 19 12:01:31.046775 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 19 12:01:31.046788 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 19 12:01:31.046800 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 19 12:01:31.046813 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 19 12:01:31.046825 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 19 12:01:31.046837 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 19 12:01:31.046850 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 19 12:01:31.046862 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 19 12:01:31.046880 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 19 12:01:31.047944 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 19 12:01:31.047966 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 19 12:01:31.047979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 19 12:01:31.047992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 19 12:01:31.048005 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 19 12:01:31.048018 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 19 12:01:31.048031 kernel: Zone ranges:
Mar 19 12:01:31.048043 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar 19 12:01:31.048056 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 19 12:01:31.048069 kernel:   Normal   empty
Mar 19 12:01:31.048096 kernel: Movable zone start for each node
Mar 19 12:01:31.048111 kernel: Early memory node ranges
Mar 19 12:01:31.048123 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Mar 19 12:01:31.048136 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 19 12:01:31.048148 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 19 12:01:31.048161 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 19 12:01:31.048173 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 19 12:01:31.048192 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 19 12:01:31.048206 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 19 12:01:31.048225 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 19 12:01:31.048238 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 19 12:01:31.048251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 19 12:01:31.048264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 19 12:01:31.048276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 19 12:01:31.048301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 19 12:01:31.048314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 19 12:01:31.048326 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 19 12:01:31.048339 kernel: TSC deadline timer available
Mar 19 12:01:31.048357 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 19 12:01:31.048370 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 19 12:01:31.048383 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 19 12:01:31.048395 kernel: Booting paravirtualized kernel on KVM
Mar 19 12:01:31.048408 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 19 12:01:31.048420 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 19 12:01:31.048433 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Mar 19 12:01:31.048446 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Mar 19 12:01:31.048458 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 19 12:01:31.048476 kernel: kvm-guest: PV spinlocks enabled
Mar 19 12:01:31.048488 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 19 12:01:31.048503 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc
Mar 19 12:01:31.048516 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 12:01:31.048529 kernel: random: crng init done
Mar 19 12:01:31.048541 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 12:01:31.048554 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 19 12:01:31.048566 kernel: Fallback order for Node 0: 0
Mar 19 12:01:31.048590 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515804
Mar 19 12:01:31.048603 kernel: Policy zone: DMA32
Mar 19 12:01:31.048616 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 12:01:31.048629 kernel: software IO TLB: area num 16.
Mar 19 12:01:31.048642 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43480K init, 1592K bss, 196876K reserved, 0K cma-reserved)
Mar 19 12:01:31.048655 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 19 12:01:31.048668 kernel: Kernel/User page tables isolation: enabled
Mar 19 12:01:31.048680 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 19 12:01:31.048693 kernel: ftrace: allocated 149 pages with 4 groups
Mar 19 12:01:31.048711 kernel: Dynamic Preempt: voluntary
Mar 19 12:01:31.048724 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 12:01:31.048737 kernel: rcu: RCU event tracing is enabled.
Mar 19 12:01:31.048750 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 19 12:01:31.048763 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 12:01:31.048788 kernel: Rude variant of Tasks RCU enabled.
Mar 19 12:01:31.048806 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 12:01:31.048820 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 12:01:31.048833 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 19 12:01:31.048846 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 19 12:01:31.048859 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 12:01:31.048873 kernel: Console: colour VGA+ 80x25
Mar 19 12:01:31.049938 kernel: printk: console [tty0] enabled
Mar 19 12:01:31.049955 kernel: printk: console [ttyS0] enabled
Mar 19 12:01:31.049968 kernel: ACPI: Core revision 20230628
Mar 19 12:01:31.049982 kernel: APIC: Switch to symmetric I/O mode setup
Mar 19 12:01:31.049995 kernel: x2apic enabled
Mar 19 12:01:31.050017 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 19 12:01:31.050038 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 19 12:01:31.050053 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 19 12:01:31.050067 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 19 12:01:31.050080 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 19 12:01:31.050094 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 19 12:01:31.050107 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 19 12:01:31.050120 kernel: Spectre V2 : Mitigation: Retpolines
Mar 19 12:01:31.050133 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 19 12:01:31.050152 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 19 12:01:31.050165 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 19 12:01:31.050179 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 19 12:01:31.050192 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 19 12:01:31.050205 kernel: MDS: Mitigation: Clear CPU buffers
Mar 19 12:01:31.050218 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 19 12:01:31.050231 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 19 12:01:31.050244 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 19 12:01:31.050257 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 19 12:01:31.050270 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 19 12:01:31.050296 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 19 12:01:31.050316 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 19 12:01:31.050335 kernel: Freeing SMP alternatives memory: 32K
Mar 19 12:01:31.050350 kernel: pid_max: default: 32768 minimum: 301
Mar 19 12:01:31.050363 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 12:01:31.050376 kernel: landlock: Up and running.
Mar 19 12:01:31.050389 kernel: SELinux: Initializing.
Mar 19 12:01:31.050402 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 19 12:01:31.050416 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 19 12:01:31.050429 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 19 12:01:31.050442 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 19 12:01:31.050456 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 19 12:01:31.050476 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 19 12:01:31.050489 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 19 12:01:31.050502 kernel: signal: max sigframe size: 1776
Mar 19 12:01:31.050515 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 12:01:31.050529 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 12:01:31.050543 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 19 12:01:31.050556 kernel: smp: Bringing up secondary CPUs ...
Mar 19 12:01:31.050569 kernel: smpboot: x86: Booting SMP configuration:
Mar 19 12:01:31.050583 kernel: .... node #0, CPUs: #1
Mar 19 12:01:31.050601 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 19 12:01:31.050615 kernel: smp: Brought up 1 node, 2 CPUs
Mar 19 12:01:31.050628 kernel: smpboot: Max logical packages: 16
Mar 19 12:01:31.050641 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 19 12:01:31.050654 kernel: devtmpfs: initialized
Mar 19 12:01:31.050668 kernel: x86/mm: Memory block size: 128MB
Mar 19 12:01:31.050681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 12:01:31.050695 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 19 12:01:31.050708 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 12:01:31.050726 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 12:01:31.050740 kernel: audit: initializing netlink subsys (disabled)
Mar 19 12:01:31.050753 kernel: audit: type=2000 audit(1742385689.352:1): state=initialized audit_enabled=0 res=1
Mar 19 12:01:31.050766 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 12:01:31.050780 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 19 12:01:31.050793 kernel: cpuidle: using governor menu
Mar 19 12:01:31.050806 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 12:01:31.050820 kernel: dca service started, version 1.12.1
Mar 19 12:01:31.050833 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 19 12:01:31.050852 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 19 12:01:31.050866 kernel: PCI: Using configuration type 1 for base access
Mar 19 12:01:31.050879 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 19 12:01:31.051057 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 12:01:31.051072 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 12:01:31.051085 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 12:01:31.051099 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 12:01:31.051112 kernel: ACPI: Added _OSI(Module Device)
Mar 19 12:01:31.051125 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 12:01:31.051146 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 12:01:31.051160 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 12:01:31.051173 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 12:01:31.051186 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 19 12:01:31.051199 kernel: ACPI: Interpreter enabled
Mar 19 12:01:31.051212 kernel: ACPI: PM: (supports S0 S5)
Mar 19 12:01:31.051226 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 19 12:01:31.051239 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 19 12:01:31.051252 kernel: PCI: Using E820 reservations for host bridge windows
Mar 19 12:01:31.051271 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 19 12:01:31.051299 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 19 12:01:31.051590 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 19 12:01:31.051785 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 19 12:01:31.052017 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 19 12:01:31.052038 kernel: PCI host bridge to bus 0000:00
Mar 19 12:01:31.052222 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 19 12:01:31.052426 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 19 12:01:31.052600 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 19 12:01:31.052771 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 19 12:01:31.052962 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 19 12:01:31.053139 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 19 12:01:31.053778 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 19 12:01:31.054064 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 19 12:01:31.054371 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 19 12:01:31.054564 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 19 12:01:31.054750 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 19 12:01:31.056985 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 19 12:01:31.057192 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 19 12:01:31.057433 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.057638 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 19 12:01:31.057866 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.058082 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 19 12:01:31.058297 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.058493 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 19 12:01:31.058697 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.060014 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 19 12:01:31.060230 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.060439 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 19 12:01:31.060641 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.060829 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 19 12:01:31.062081 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.062303 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 19 12:01:31.062541 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 19 12:01:31.062733 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 19 12:01:31.064984 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 19 12:01:31.065189 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 19 12:01:31.065399 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 19 12:01:31.065589 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 19 12:01:31.065791 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 19 12:01:31.066018 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 19 12:01:31.066209 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 19 12:01:31.066444 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 19 12:01:31.066634 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 19 12:01:31.066831 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 19 12:01:31.069140 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 19 12:01:31.069389 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 19 12:01:31.069578 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 19 12:01:31.069762 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 19 12:01:31.069989 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 19 12:01:31.070176 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 19 12:01:31.070414 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 19 12:01:31.070617 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 19 12:01:31.070806 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 19 12:01:31.072038 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 19 12:01:31.072229 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 19 12:01:31.072461 kernel: pci_bus 0000:02: extended config space not accessible
Mar 19 12:01:31.072714 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 19 12:01:31.074968 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 19 12:01:31.075167 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 19 12:01:31.075375 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 19 12:01:31.075579 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 19 12:01:31.075771 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 19 12:01:31.077001 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 19 12:01:31.077195 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 19 12:01:31.077406 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 19 12:01:31.077613 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 19 12:01:31.077807 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 19 12:01:31.078018 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 19 12:01:31.078206 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 19 12:01:31.078407 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 19 12:01:31.078598 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 19 12:01:31.078784 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 19 12:01:31.081025 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 19 12:01:31.081218 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 19 12:01:31.081420 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 19 12:01:31.081604 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 19 12:01:31.081792 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 19 12:01:31.083022 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 19 12:01:31.083210 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 19 12:01:31.083413 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 19 12:01:31.083608 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 19 12:01:31.083791 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 19 12:01:31.086019 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 19 12:01:31.086208 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 19 12:01:31.086407 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 19 12:01:31.086428 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 19 12:01:31.086442 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 19 12:01:31.086456 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 19 12:01:31.086477 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 19 12:01:31.086491 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 19 12:01:31.086504 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 19 12:01:31.086518 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 19 12:01:31.086531 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 19 12:01:31.086545 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 19 12:01:31.086558 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 19 12:01:31.086571 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 19 12:01:31.086585 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 19 12:01:31.086603 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 19 12:01:31.086617 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 19 12:01:31.086630 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 19 12:01:31.086644 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 19 12:01:31.086657 kernel: iommu: Default domain type: Translated
Mar 19 12:01:31.086671 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 19 12:01:31.086684 kernel: PCI: Using ACPI for IRQ routing
Mar 19 12:01:31.086697 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 19 12:01:31.086711 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 19 12:01:31.086730 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 19 12:01:31.086954 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 19 12:01:31.087140 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 19 12:01:31.087338 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 19 12:01:31.087359 kernel: vgaarb: loaded
Mar 19 12:01:31.087373 kernel: clocksource: Switched to clocksource kvm-clock
Mar 19 12:01:31.087386 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 12:01:31.087400 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 12:01:31.087421 kernel: pnp: PnP ACPI init
Mar 19 12:01:31.087639 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 19 12:01:31.087662 kernel: pnp: PnP ACPI: found 5 devices
Mar 19 12:01:31.087676 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 19 12:01:31.087689 kernel: NET: Registered PF_INET protocol family
Mar 19 12:01:31.087703 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 12:01:31.087717 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 19 12:01:31.087730 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 12:01:31.087744 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 19 12:01:31.087765 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 19 12:01:31.087778 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 19 12:01:31.087800 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 19 12:01:31.087813 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 19 12:01:31.087827 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 12:01:31.087840 kernel: NET: Registered PF_XDP protocol family
Mar 19 12:01:31.089095 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 19 12:01:31.089301 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 19 12:01:31.089501 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 19 12:01:31.089688 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 19 12:01:31.089912 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 19 12:01:31.090114 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 19 12:01:31.090312 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 19 12:01:31.090498 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 19 12:01:31.090696 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 19 12:01:31.093899 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 19 12:01:31.094103 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 19 12:01:31.094300 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 19 12:01:31.094489 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 19 12:01:31.094676 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 19 12:01:31.094864 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 19 12:01:31.095090 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 19 12:01:31.095336 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 19 12:01:31.095545 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 19 12:01:31.095736 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 19 12:01:31.097999 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 19 12:01:31.098241 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 19 12:01:31.098442 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 19 12:01:31.098630 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 19 12:01:31.098818 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 19 12:01:31.099044 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 19 12:01:31.099248 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 19 12:01:31.099487 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 19 12:01:31.099681 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 19 12:01:31.100928 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 19 12:01:31.101134 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 19 12:01:31.101345 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 19 12:01:31.101530 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 19 12:01:31.101713 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 19 12:01:31.102933 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 19 12:01:31.103129 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 19 12:01:31.103329 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 19 12:01:31.103515 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 19 12:01:31.103700 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 19 12:01:31.105916 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 19 12:01:31.106128 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 19 12:01:31.106340 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 19 12:01:31.106526 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 19 12:01:31.106713 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 19 12:01:31.106941 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 19 12:01:31.107146 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 19 12:01:31.107344 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 19 12:01:31.107530 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 19 12:01:31.107714 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 19 12:01:31.109936 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 19 12:01:31.110131 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 19 12:01:31.110323 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 19 12:01:31.110495 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 19 12:01:31.110672 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 19 12:01:31.110841 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 19 12:01:31.111049 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 19 12:01:31.111219 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 19 12:01:31.111425 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 19 12:01:31.111606 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 19 12:01:31.111790 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 19 12:01:31.112192 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 19 12:01:31.112461 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 19 12:01:31.112659 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 19 12:01:31.112849 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 19 12:01:31.113088 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 19 12:01:31.113285 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 19 12:01:31.113474 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 19 12:01:31.113691 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 19 12:01:31.113919 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 19 12:01:31.114106 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 19 12:01:31.114367 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 19 12:01:31.114561 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 19 12:01:31.114745 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 19 12:01:31.114963 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 19 12:01:31.115156 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 19 12:01:31.115351 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 19 12:01:31.115559 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 19 12:01:31.115742 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 19 12:01:31.115951 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 19 12:01:31.116153 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 19 12:01:31.116352 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 19 12:01:31.116548 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 19 12:01:31.116571 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 19 12:01:31.116586 kernel: PCI: CLS 0 bytes, default 64
Mar 19 12:01:31.116601 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar
19 12:01:31.116616 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Mar 19 12:01:31.116630 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 19 12:01:31.116645 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 19 12:01:31.116659 kernel: Initialise system trusted keyrings Mar 19 12:01:31.116683 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 19 12:01:31.116697 kernel: Key type asymmetric registered Mar 19 12:01:31.116711 kernel: Asymmetric key parser 'x509' registered Mar 19 12:01:31.116725 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 19 12:01:31.116739 kernel: io scheduler mq-deadline registered Mar 19 12:01:31.116753 kernel: io scheduler kyber registered Mar 19 12:01:31.116768 kernel: io scheduler bfq registered Mar 19 12:01:31.116998 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 19 12:01:31.117191 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 19 12:01:31.117405 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.117596 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 19 12:01:31.117784 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 19 12:01:31.117995 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.118193 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 19 12:01:31.118396 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 19 12:01:31.118598 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.118800 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 19 
12:01:31.119073 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 19 12:01:31.119262 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.119465 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 19 12:01:31.119653 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 19 12:01:31.119848 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.120053 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 19 12:01:31.120238 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 19 12:01:31.120437 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.120629 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 19 12:01:31.120817 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 19 12:01:31.121064 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.121254 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 19 12:01:31.121455 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 19 12:01:31.121642 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 19 12:01:31.121664 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 19 12:01:31.121679 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 19 12:01:31.121702 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 19 12:01:31.121717 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 19 12:01:31.121731 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 19 12:01:31.121746 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 19 12:01:31.121760 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 19 12:01:31.121780 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 19 12:01:31.122027 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 19 12:01:31.122051 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 19 12:01:31.122233 kernel: rtc_cmos 00:03: registered as rtc0 Mar 19 12:01:31.122427 kernel: rtc_cmos 00:03: setting system clock to 2025-03-19T12:01:30 UTC (1742385690) Mar 19 12:01:31.122605 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 19 12:01:31.122625 kernel: intel_pstate: CPU model not supported Mar 19 12:01:31.122640 kernel: NET: Registered PF_INET6 protocol family Mar 19 12:01:31.122654 kernel: Segment Routing with IPv6 Mar 19 12:01:31.122668 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 12:01:31.122682 kernel: NET: Registered PF_PACKET protocol family Mar 19 12:01:31.122696 kernel: Key type dns_resolver registered Mar 19 12:01:31.122718 kernel: IPI shorthand broadcast: enabled Mar 19 12:01:31.122733 kernel: sched_clock: Marking stable (1416026531, 234962524)->(1934622265, -283633210) Mar 19 12:01:31.122747 kernel: registered taskstats version 1 Mar 19 12:01:31.122761 kernel: Loading compiled-in X.509 certificates Mar 19 12:01:31.122776 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ea8d6696bd19c98b32173a761210456cdad6b56b' Mar 19 12:01:31.122790 kernel: Key type .fscrypt registered Mar 19 12:01:31.122804 kernel: Key type fscrypt-provisioning registered Mar 19 12:01:31.122818 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 19 12:01:31.122832 kernel: ima: Allocated hash algorithm: sha1 Mar 19 12:01:31.122852 kernel: ima: No architecture policies found Mar 19 12:01:31.122866 kernel: clk: Disabling unused clocks Mar 19 12:01:31.122880 kernel: Freeing unused kernel image (initmem) memory: 43480K Mar 19 12:01:31.122916 kernel: Write protecting the kernel read-only data: 38912k Mar 19 12:01:31.122931 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 19 12:01:31.122945 kernel: Run /init as init process Mar 19 12:01:31.122959 kernel: with arguments: Mar 19 12:01:31.122973 kernel: /init Mar 19 12:01:31.122987 kernel: with environment: Mar 19 12:01:31.123009 kernel: HOME=/ Mar 19 12:01:31.123022 kernel: TERM=linux Mar 19 12:01:31.123036 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 19 12:01:31.123051 systemd[1]: Successfully made /usr/ read-only. Mar 19 12:01:31.123071 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 12:01:31.123086 systemd[1]: Detected virtualization kvm. Mar 19 12:01:31.123101 systemd[1]: Detected architecture x86-64. Mar 19 12:01:31.123121 systemd[1]: Running in initrd. Mar 19 12:01:31.123136 systemd[1]: No hostname configured, using default hostname. Mar 19 12:01:31.123152 systemd[1]: Hostname set to . Mar 19 12:01:31.123166 systemd[1]: Initializing machine ID from VM UUID. Mar 19 12:01:31.123181 systemd[1]: Queued start job for default target initrd.target. Mar 19 12:01:31.123196 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 12:01:31.123211 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 19 12:01:31.123227 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 19 12:01:31.123248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 12:01:31.123263 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 19 12:01:31.123291 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 19 12:01:31.123308 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 19 12:01:31.123324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 19 12:01:31.123339 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 12:01:31.123355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 12:01:31.123376 systemd[1]: Reached target paths.target - Path Units. Mar 19 12:01:31.123391 systemd[1]: Reached target slices.target - Slice Units. Mar 19 12:01:31.123406 systemd[1]: Reached target swap.target - Swaps. Mar 19 12:01:31.123421 systemd[1]: Reached target timers.target - Timer Units. Mar 19 12:01:31.123436 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 12:01:31.123451 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 12:01:31.123466 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 19 12:01:31.123481 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 19 12:01:31.123497 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 12:01:31.123518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 12:01:31.123533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 19 12:01:31.123548 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 12:01:31.123563 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 19 12:01:31.123578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 12:01:31.123593 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 19 12:01:31.123614 systemd[1]: Starting systemd-fsck-usr.service... Mar 19 12:01:31.123630 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 12:01:31.123650 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 12:01:31.123666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 12:01:31.123737 systemd-journald[202]: Collecting audit messages is disabled. Mar 19 12:01:31.123785 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 19 12:01:31.123807 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 12:01:31.123823 systemd[1]: Finished systemd-fsck-usr.service. Mar 19 12:01:31.123851 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 12:01:31.123866 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 19 12:01:31.123880 kernel: Bridge firewalling registered Mar 19 12:01:31.123901 systemd-journald[202]: Journal started Mar 19 12:01:31.123979 systemd-journald[202]: Runtime Journal (/run/log/journal/5ee36eaae34f4900b720c422caf5e68e) is 4.7M, max 37.9M, 33.2M free. Mar 19 12:01:31.064710 systemd-modules-load[203]: Inserted module 'overlay' Mar 19 12:01:31.102450 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 19 12:01:31.172706 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 12:01:31.172795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 19 12:01:31.174982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 12:01:31.176108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 12:01:31.192116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 12:01:31.194078 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 12:01:31.199208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 12:01:31.210716 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 12:01:31.225242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 12:01:31.235734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 12:01:31.237998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 12:01:31.252266 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 19 12:01:31.253388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 12:01:31.260163 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 12:01:31.272233 dracut-cmdline[235]: dracut-dracut-053 Mar 19 12:01:31.278439 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 12:01:31.324841 systemd-resolved[239]: Positive Trust Anchors: Mar 19 12:01:31.324869 systemd-resolved[239]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 12:01:31.324992 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 12:01:31.329752 systemd-resolved[239]: Defaulting to hostname 'linux'. Mar 19 12:01:31.332247 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 12:01:31.333188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 12:01:31.404972 kernel: SCSI subsystem initialized Mar 19 12:01:31.416927 kernel: Loading iSCSI transport class v2.0-870. Mar 19 12:01:31.430931 kernel: iscsi: registered transport (tcp) Mar 19 12:01:31.457539 kernel: iscsi: registered transport (qla4xxx) Mar 19 12:01:31.457636 kernel: QLogic iSCSI HBA Driver Mar 19 12:01:31.529559 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 19 12:01:31.537187 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 12:01:31.573453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 19 12:01:31.573576 kernel: device-mapper: uevent: version 1.0.3 Mar 19 12:01:31.573598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 12:01:31.625959 kernel: raid6: sse2x4 gen() 13601 MB/s Mar 19 12:01:31.643942 kernel: raid6: sse2x2 gen() 9534 MB/s Mar 19 12:01:31.662587 kernel: raid6: sse2x1 gen() 9462 MB/s Mar 19 12:01:31.662711 kernel: raid6: using algorithm sse2x4 gen() 13601 MB/s Mar 19 12:01:31.681645 kernel: raid6: .... xor() 7625 MB/s, rmw enabled Mar 19 12:01:31.681773 kernel: raid6: using ssse3x2 recovery algorithm Mar 19 12:01:31.707941 kernel: xor: automatically using best checksumming function avx Mar 19 12:01:31.903949 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 12:01:31.919742 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 19 12:01:31.928216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 12:01:31.949502 systemd-udevd[420]: Using default interface naming scheme 'v255'. Mar 19 12:01:31.958579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 12:01:31.969276 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 12:01:31.989288 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Mar 19 12:01:32.036502 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 12:01:32.044186 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 12:01:32.183954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 12:01:32.193798 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 12:01:32.231349 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 12:01:32.233792 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 19 12:01:32.236107 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 12:01:32.237641 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 12:01:32.243074 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 12:01:32.278034 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 12:01:32.348108 kernel: cryptd: max_cpu_qlen set to 1000 Mar 19 12:01:32.350916 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 19 12:01:32.398934 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 19 12:01:32.399209 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 12:01:32.399294 kernel: GPT:17805311 != 125829119 Mar 19 12:01:32.399318 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 12:01:32.399336 kernel: GPT:17805311 != 125829119 Mar 19 12:01:32.399354 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 12:01:32.399372 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 12:01:32.399391 kernel: ACPI: bus type USB registered Mar 19 12:01:32.399409 kernel: usbcore: registered new interface driver usbfs Mar 19 12:01:32.399427 kernel: usbcore: registered new interface driver hub Mar 19 12:01:32.399446 kernel: usbcore: registered new device driver usb Mar 19 12:01:32.397933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 12:01:32.398133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 12:01:32.399534 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 12:01:32.403773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 12:01:32.403984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 12:01:32.406048 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 19 12:01:32.416229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 12:01:32.424472 kernel: AVX version of gcm_enc/dec engaged. Mar 19 12:01:32.421491 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 12:01:32.456924 kernel: AES CTR mode by8 optimization enabled Mar 19 12:01:32.460976 kernel: BTRFS: device fsid 8d57424d-5abc-4888-810f-658d040a58e4 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (468) Mar 19 12:01:32.479973 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 19 12:01:32.484514 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 19 12:01:32.484787 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 19 12:01:32.485050 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 19 12:01:32.485297 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 19 12:01:32.485528 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 19 12:01:32.485754 kernel: hub 1-0:1.0: USB hub found Mar 19 12:01:32.486058 kernel: hub 1-0:1.0: 4 ports detected Mar 19 12:01:32.486300 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 19 12:01:32.486670 kernel: hub 2-0:1.0: USB hub found Mar 19 12:01:32.486948 kernel: hub 2-0:1.0: 4 ports detected Mar 19 12:01:32.502949 kernel: libata version 3.00 loaded. Mar 19 12:01:32.503807 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 19 12:01:32.631267 kernel: ahci 0000:00:1f.2: version 3.0 Mar 19 12:01:32.631628 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 19 12:01:32.631653 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470) Mar 19 12:01:32.631673 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 19 12:01:32.631915 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 19 12:01:32.632143 kernel: scsi host0: ahci Mar 19 12:01:32.632454 kernel: scsi host1: ahci Mar 19 12:01:32.632704 kernel: scsi host2: ahci Mar 19 12:01:32.632984 kernel: scsi host3: ahci Mar 19 12:01:32.633205 kernel: scsi host4: ahci Mar 19 12:01:32.633438 kernel: scsi host5: ahci Mar 19 12:01:32.633654 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Mar 19 12:01:32.633676 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Mar 19 12:01:32.633704 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Mar 19 12:01:32.633724 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Mar 19 12:01:32.633742 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Mar 19 12:01:32.633761 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Mar 19 12:01:32.630222 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 19 12:01:32.632372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 12:01:32.672462 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 19 12:01:32.685730 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 19 12:01:32.698943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 19 12:01:32.717306 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 12:01:32.721077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 12:01:32.727913 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 19 12:01:32.735949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 12:01:32.737615 disk-uuid[564]: Primary Header is updated. Mar 19 12:01:32.737615 disk-uuid[564]: Secondary Entries is updated. Mar 19 12:01:32.737615 disk-uuid[564]: Secondary Header is updated. Mar 19 12:01:32.750345 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 12:01:32.760270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 12:01:32.878261 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 19 12:01:32.878346 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 19 12:01:32.878911 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 19 12:01:32.881546 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 19 12:01:32.881582 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 19 12:01:32.884367 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 19 12:01:32.886360 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 19 12:01:32.897117 kernel: usbcore: registered new interface driver usbhid Mar 19 12:01:32.897157 kernel: usbhid: USB HID core driver Mar 19 12:01:32.905952 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 19 12:01:32.911001 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 19 12:01:33.753408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 12:01:33.754768 disk-uuid[567]: The operation has completed successfully. 
Mar 19 12:01:33.833623 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 12:01:33.833795 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 12:01:33.875090 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 12:01:33.881616 sh[584]: Success Mar 19 12:01:33.897946 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 19 12:01:33.958149 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 12:01:33.959365 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 12:01:33.962245 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 12:01:34.005416 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57424d-5abc-4888-810f-658d040a58e4 Mar 19 12:01:34.005501 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 19 12:01:34.007504 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 12:01:34.010871 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 12:01:34.010937 kernel: BTRFS info (device dm-0): using free space tree Mar 19 12:01:34.023321 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 12:01:34.024911 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 12:01:34.030395 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 12:01:34.035104 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 19 12:01:34.056627 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 12:01:34.056700 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 12:01:34.056721 kernel: BTRFS info (device vda6): using free space tree Mar 19 12:01:34.060905 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 12:01:34.076601 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 12:01:34.079454 kernel: BTRFS info (device vda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 12:01:34.087196 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 12:01:34.096108 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 12:01:34.194261 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 12:01:34.217151 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 12:01:34.338429 ignition[681]: Ignition 2.20.0 Mar 19 12:01:34.338456 ignition[681]: Stage: fetch-offline Mar 19 12:01:34.341671 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 19 12:01:34.338537 ignition[681]: no configs at "/usr/lib/ignition/base.d" Mar 19 12:01:34.338557 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 19 12:01:34.338717 ignition[681]: parsed url from cmdline: "" Mar 19 12:01:34.338725 ignition[681]: no config URL provided Mar 19 12:01:34.338735 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 12:01:34.338753 ignition[681]: no config at "/usr/lib/ignition/user.ign" Mar 19 12:01:34.338763 ignition[681]: failed to fetch config: resource requires networking Mar 19 12:01:34.352221 systemd-networkd[771]: lo: Link UP Mar 19 12:01:34.339041 ignition[681]: Ignition finished successfully Mar 19 12:01:34.352228 systemd-networkd[771]: lo: Gained carrier Mar 19 12:01:34.355347 systemd-networkd[771]: Enumeration completed Mar 19 12:01:34.355947 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 12:01:34.355955 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 12:01:34.357088 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 12:01:34.358380 systemd-networkd[771]: eth0: Link UP Mar 19 12:01:34.358386 systemd-networkd[771]: eth0: Gained carrier Mar 19 12:01:34.358399 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 12:01:34.359933 systemd[1]: Reached target network.target - Network. Mar 19 12:01:34.372223 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 19 12:01:34.404050 systemd-networkd[771]: eth0: DHCPv4 address 10.230.57.154/30, gateway 10.230.57.153 acquired from 10.230.57.153
Mar 19 12:01:34.436418 ignition[780]: Ignition 2.20.0
Mar 19 12:01:34.436435 ignition[780]: Stage: fetch
Mar 19 12:01:34.436742 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Mar 19 12:01:34.436775 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 19 12:01:34.436952 ignition[780]: parsed url from cmdline: ""
Mar 19 12:01:34.436960 ignition[780]: no config URL provided
Mar 19 12:01:34.436971 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Mar 19 12:01:34.437000 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Mar 19 12:01:34.437146 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 19 12:01:34.437471 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 19 12:01:34.437531 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 19 12:01:34.457093 ignition[780]: GET result: OK
Mar 19 12:01:34.457814 ignition[780]: parsing config with SHA512: 5a91a1304fba8f638cd579b2b3c2153c31c7e9ad20f068b3a0267218cc7e25f3bd9dbe1a842cc3bc0e181781c131e2f7faf2f0b0a1947b74e90d3f8a4c030b67
Mar 19 12:01:34.463623 unknown[780]: fetched base config from "system"
Mar 19 12:01:34.463641 unknown[780]: fetched base config from "system"
Mar 19 12:01:34.464148 ignition[780]: fetch: fetch complete
Mar 19 12:01:34.463651 unknown[780]: fetched user config from "openstack"
Mar 19 12:01:34.464157 ignition[780]: fetch: fetch passed
Mar 19 12:01:34.467325 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 19 12:01:34.464239 ignition[780]: Ignition finished successfully
Mar 19 12:01:34.485221 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 19 12:01:34.508621 ignition[787]: Ignition 2.20.0
Mar 19 12:01:34.509693 ignition[787]: Stage: kargs
Mar 19 12:01:34.509991 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 19 12:01:34.510014 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 19 12:01:34.513472 ignition[787]: kargs: kargs passed
Mar 19 12:01:34.514301 ignition[787]: Ignition finished successfully
Mar 19 12:01:34.516271 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 19 12:01:34.522144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 19 12:01:34.541661 ignition[793]: Ignition 2.20.0
Mar 19 12:01:34.541684 ignition[793]: Stage: disks
Mar 19 12:01:34.541979 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 19 12:01:34.542001 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 19 12:01:34.543447 ignition[793]: disks: disks passed
Mar 19 12:01:34.544651 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 19 12:01:34.543521 ignition[793]: Ignition finished successfully
Mar 19 12:01:34.546328 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 19 12:01:34.547721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 19 12:01:34.549127 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 12:01:34.550568 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 12:01:34.552100 systemd[1]: Reached target basic.target - Basic System.
Mar 19 12:01:34.560090 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 19 12:01:34.579762 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 19 12:01:34.583382 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 19 12:01:35.011036 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 19 12:01:35.121916 kernel: EXT4-fs (vda9): mounted filesystem 303a73dd-e104-408b-9302-bf91b04ba1ca r/w with ordered data mode. Quota mode: none.
Mar 19 12:01:35.123235 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 19 12:01:35.124509 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 19 12:01:35.138084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 19 12:01:35.141348 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 19 12:01:35.143194 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 19 12:01:35.144781 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 19 12:01:35.146168 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 19 12:01:35.146218 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 19 12:01:35.163545 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809)
Mar 19 12:01:35.163585 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 12:01:35.163606 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 19 12:01:35.163624 kernel: BTRFS info (device vda6): using free space tree
Mar 19 12:01:35.168272 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 19 12:01:35.168071 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 19 12:01:35.180200 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 19 12:01:35.186559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 19 12:01:35.265107 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 19 12:01:35.277912 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 19 12:01:35.285477 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 19 12:01:35.292398 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 19 12:01:35.401233 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 19 12:01:35.409018 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 19 12:01:35.411862 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 19 12:01:35.425916 kernel: BTRFS info (device vda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 12:01:35.451935 ignition[925]: INFO : Ignition 2.20.0
Mar 19 12:01:35.451935 ignition[925]: INFO : Stage: mount
Mar 19 12:01:35.451935 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 12:01:35.451935 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 19 12:01:35.457268 ignition[925]: INFO : mount: mount passed
Mar 19 12:01:35.457268 ignition[925]: INFO : Ignition finished successfully
Mar 19 12:01:35.455292 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 19 12:01:35.458385 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 19 12:01:35.699201 systemd-networkd[771]: eth0: Gained IPv6LL
Mar 19 12:01:36.003277 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 19 12:01:37.208383 systemd-networkd[771]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e66:24:19ff:fee6:399a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e66:24:19ff:fee6:399a/64 assigned by NDisc.
Mar 19 12:01:37.208401 systemd-networkd[771]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 19 12:01:42.313287 coreos-metadata[811]: Mar 19 12:01:42.313 WARN failed to locate config-drive, using the metadata service API instead
Mar 19 12:01:42.335119 coreos-metadata[811]: Mar 19 12:01:42.335 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 19 12:01:42.352097 coreos-metadata[811]: Mar 19 12:01:42.352 INFO Fetch successful
Mar 19 12:01:42.353059 coreos-metadata[811]: Mar 19 12:01:42.352 INFO wrote hostname srv-z8dvi.gb1.brightbox.com to /sysroot/etc/hostname
Mar 19 12:01:42.355145 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 19 12:01:42.355451 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 19 12:01:42.370116 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 19 12:01:42.393162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 19 12:01:42.405974 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Mar 19 12:01:42.409901 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5
Mar 19 12:01:42.409952 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 19 12:01:42.411213 kernel: BTRFS info (device vda6): using free space tree
Mar 19 12:01:42.417351 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 19 12:01:42.419638 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 19 12:01:42.461902 ignition[960]: INFO : Ignition 2.20.0
Mar 19 12:01:42.461902 ignition[960]: INFO : Stage: files
Mar 19 12:01:42.463754 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 12:01:42.463754 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 19 12:01:42.463754 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 19 12:01:42.466559 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 19 12:01:42.466559 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 19 12:01:42.470930 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 19 12:01:42.471957 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 19 12:01:42.471957 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 19 12:01:42.471693 unknown[960]: wrote ssh authorized keys file for user: core
Mar 19 12:01:42.474914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 19 12:01:42.474914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 19 12:01:42.683099 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 19 12:01:43.918707 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 19 12:01:43.918707 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 19 12:01:43.930553 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 19 12:01:44.532205 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 19 12:01:44.937130 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 19 12:01:44.938946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 19 12:01:44.948746 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 19 12:01:44.948746 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 19 12:01:44.948746 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 19 12:01:44.948746 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 19 12:01:44.948746 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 19 12:01:45.419030 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 19 12:01:46.985695 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 19 12:01:46.985695 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 19 12:01:46.992909 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 19 12:01:46.994312 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 19 12:01:46.994312 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 19 12:01:46.994312 ignition[960]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 19 12:01:46.994312 ignition[960]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 19 12:01:46.999760 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 19 12:01:46.999760 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 19 12:01:46.999760 ignition[960]: INFO : files: files passed
Mar 19 12:01:46.999760 ignition[960]: INFO : Ignition finished successfully
Mar 19 12:01:46.999416 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 19 12:01:47.017202 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 19 12:01:47.021449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 19 12:01:47.023404 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 19 12:01:47.023590 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 19 12:01:47.051963 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 12:01:47.051963 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 12:01:47.055911 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 12:01:47.058238 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 19 12:01:47.059643 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 19 12:01:47.070129 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 19 12:01:47.104873 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 19 12:01:47.105073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 19 12:01:47.106997 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 19 12:01:47.108309 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 19 12:01:47.109941 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 19 12:01:47.124120 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 19 12:01:47.142054 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 19 12:01:47.149118 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 19 12:01:47.163797 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 19 12:01:47.165704 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 12:01:47.167186 systemd[1]: Stopped target timers.target - Timer Units.
Mar 19 12:01:47.168136 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 19 12:01:47.168405 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 19 12:01:47.169996 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 19 12:01:47.170938 systemd[1]: Stopped target basic.target - Basic System.
Mar 19 12:01:47.172353 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 19 12:01:47.173786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 19 12:01:47.175158 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 19 12:01:47.176682 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 19 12:01:47.178263 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 19 12:01:47.179814 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 19 12:01:47.181338 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 19 12:01:47.182869 systemd[1]: Stopped target swap.target - Swaps.
Mar 19 12:01:47.185791 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 19 12:01:47.186060 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 19 12:01:47.188081 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 19 12:01:47.189052 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 12:01:47.190538 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 19 12:01:47.190740 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 12:01:47.192204 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 19 12:01:47.192421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 19 12:01:47.193870 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 19 12:01:47.194209 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 19 12:01:47.195280 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 19 12:01:47.195551 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 19 12:01:47.205188 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 19 12:01:47.208262 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 19 12:01:47.211668 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 19 12:01:47.211955 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 12:01:47.219635 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 19 12:01:47.219864 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 12:01:47.243063 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 19 12:01:47.244088 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 19 12:01:47.248031 ignition[1013]: INFO : Ignition 2.20.0
Mar 19 12:01:47.248031 ignition[1013]: INFO : Stage: umount
Mar 19 12:01:47.249674 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 12:01:47.249674 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 19 12:01:47.255839 ignition[1013]: INFO : umount: umount passed
Mar 19 12:01:47.257960 ignition[1013]: INFO : Ignition finished successfully
Mar 19 12:01:47.260083 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 19 12:01:47.261107 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 19 12:01:47.264640 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 19 12:01:47.266738 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 19 12:01:47.266959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 19 12:01:47.269164 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 19 12:01:47.269291 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 19 12:01:47.270724 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 19 12:01:47.270834 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 19 12:01:47.272073 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 19 12:01:47.272171 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 19 12:01:47.273399 systemd[1]: Stopped target network.target - Network.
Mar 19 12:01:47.274645 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 19 12:01:47.274742 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 19 12:01:47.276131 systemd[1]: Stopped target paths.target - Path Units.
Mar 19 12:01:47.277352 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 19 12:01:47.279447 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 12:01:47.280473 systemd[1]: Stopped target slices.target - Slice Units.
Mar 19 12:01:47.281790 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 19 12:01:47.283287 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 19 12:01:47.283376 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 12:01:47.284917 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 19 12:01:47.284988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 12:01:47.286227 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 19 12:01:47.286332 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 19 12:01:47.287529 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 19 12:01:47.287616 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 19 12:01:47.289038 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 19 12:01:47.289123 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 19 12:01:47.290606 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 19 12:01:47.292287 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 19 12:01:47.296170 systemd-networkd[771]: eth0: DHCPv6 lease lost
Mar 19 12:01:47.302401 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 19 12:01:47.302632 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 19 12:01:47.306276 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 19 12:01:47.306643 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 19 12:01:47.306865 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 19 12:01:47.312304 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 19 12:01:47.314719 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 19 12:01:47.314842 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 12:01:47.321026 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 19 12:01:47.321781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 19 12:01:47.321909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 19 12:01:47.322848 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 12:01:47.323006 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 12:01:47.325082 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 19 12:01:47.325164 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 19 12:01:47.326994 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 19 12:01:47.327068 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 12:01:47.329297 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 12:01:47.335077 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 12:01:47.335186 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 19 12:01:47.340291 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 19 12:01:47.341956 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 12:01:47.344414 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 19 12:01:47.344507 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 19 12:01:47.346662 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 19 12:01:47.346729 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 12:01:47.347445 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 19 12:01:47.347538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 12:01:47.350719 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 19 12:01:47.350821 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 19 12:01:47.352250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 12:01:47.352366 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 12:01:47.359132 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 19 12:01:47.359988 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 19 12:01:47.360075 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 12:01:47.362309 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 19 12:01:47.362402 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 12:01:47.364516 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 19 12:01:47.364600 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 12:01:47.366759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 12:01:47.366911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 12:01:47.369170 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 19 12:01:47.369269 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 12:01:47.370004 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 19 12:01:47.370174 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 19 12:01:47.380259 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 19 12:01:47.380431 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 19 12:01:47.382415 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 19 12:01:47.398116 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 19 12:01:47.408122 systemd[1]: Switching root.
Mar 19 12:01:47.441836 systemd-journald[202]: Journal stopped
Mar 19 12:01:49.093110 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 19 12:01:49.093328 kernel: SELinux: policy capability network_peer_controls=1
Mar 19 12:01:49.093385 kernel: SELinux: policy capability open_perms=1
Mar 19 12:01:49.093426 kernel: SELinux: policy capability extended_socket_class=1
Mar 19 12:01:49.093469 kernel: SELinux: policy capability always_check_network=0
Mar 19 12:01:49.093498 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 19 12:01:49.093547 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 19 12:01:49.093576 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 19 12:01:49.093597 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 19 12:01:49.093628 kernel: audit: type=1403 audit(1742385707.750:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 19 12:01:49.093665 systemd[1]: Successfully loaded SELinux policy in 58.458ms.
Mar 19 12:01:49.093738 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 21.952ms.
Mar 19 12:01:49.093774 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 12:01:49.093798 systemd[1]: Detected virtualization kvm.
Mar 19 12:01:49.093834 systemd[1]: Detected architecture x86-64.
Mar 19 12:01:49.093859 systemd[1]: Detected first boot.
Mar 19 12:01:49.093909 systemd[1]: Hostname set to .
Mar 19 12:01:49.093960 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 12:01:49.093984 zram_generator::config[1057]: No configuration found.
Mar 19 12:01:49.094014 kernel: Guest personality initialized and is inactive
Mar 19 12:01:49.094047 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 19 12:01:49.094069 kernel: Initialized host personality
Mar 19 12:01:49.094099 kernel: NET: Registered PF_VSOCK protocol family
Mar 19 12:01:49.094137 systemd[1]: Populated /etc with preset unit settings.
Mar 19 12:01:49.094162 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 19 12:01:49.094184 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 19 12:01:49.094205 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 19 12:01:49.094240 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 19 12:01:49.094273 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 19 12:01:49.094304 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 19 12:01:49.094327 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 19 12:01:49.094363 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 19 12:01:49.094387 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 19 12:01:49.094409 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 19 12:01:49.094441 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 19 12:01:49.094486 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 19 12:01:49.094509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 12:01:49.094547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 12:01:49.094591 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 19 12:01:49.094646 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 19 12:01:49.094694 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 19 12:01:49.094737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 12:01:49.094761 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 19 12:01:49.094805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 12:01:49.094838 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 19 12:01:49.094861 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 19 12:01:49.101690 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 19 12:01:49.101753 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 19 12:01:49.101779 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 12:01:49.101822 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 12:01:49.101846 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 12:01:49.101867 systemd[1]: Reached target swap.target - Swaps.
Mar 19 12:01:49.101922 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 19 12:01:49.101947 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 19 12:01:49.101969 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 19 12:01:49.102001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 12:01:49.102033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 12:01:49.102056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 12:01:49.102078 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 19 12:01:49.102099 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 19 12:01:49.102120 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 19 12:01:49.102156 systemd[1]: Mounting media.mount - External Media Directory...
Mar 19 12:01:49.102181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 12:01:49.102202 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 19 12:01:49.102223 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 19 12:01:49.102245 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 19 12:01:49.102276 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 19 12:01:49.102300 systemd[1]: Reached target machines.target - Containers.
Mar 19 12:01:49.102322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 19 12:01:49.102353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 12:01:49.102391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 12:01:49.102414 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 19 12:01:49.102436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 12:01:49.102456 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 12:01:49.102478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 12:01:49.102499 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 19 12:01:49.102520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 12:01:49.102542 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 19 12:01:49.102594 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 19 12:01:49.102655 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 19 12:01:49.102680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 19 12:01:49.102701 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 19 12:01:49.102741 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 12:01:49.102765 kernel: fuse: init (API version 7.39)
Mar 19 12:01:49.102787 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 12:01:49.102808 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 12:01:49.102829 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 19 12:01:49.102868 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 19 12:01:49.102923 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 19 12:01:49.102958 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 12:01:49.102981 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 19 12:01:49.103020 systemd[1]: Stopped verity-setup.service.
Mar 19 12:01:49.103061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 12:01:49.103103 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 19 12:01:49.103126 kernel: loop: module loaded
Mar 19 12:01:49.103147 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 19 12:01:49.103184 systemd[1]: Mounted media.mount - External Media Directory.
Mar 19 12:01:49.103218 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 19 12:01:49.103250 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 19 12:01:49.103315 systemd-journald[1158]: Collecting audit messages is disabled.
Mar 19 12:01:49.103394 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 19 12:01:49.103419 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 19 12:01:49.103441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 12:01:49.103479 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 19 12:01:49.103503 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 19 12:01:49.103525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 12:01:49.103551 systemd-journald[1158]: Journal started
Mar 19 12:01:49.103584 systemd-journald[1158]: Runtime Journal (/run/log/journal/5ee36eaae34f4900b720c422caf5e68e) is 4.7M, max 37.9M, 33.2M free.
Mar 19 12:01:48.669169 systemd[1]: Queued start job for default target multi-user.target.
Mar 19 12:01:48.685082 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 19 12:01:49.106849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 12:01:48.685956 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 19 12:01:49.110990 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 12:01:49.115943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 12:01:49.117960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 12:01:49.119161 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 19 12:01:49.119429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 19 12:01:49.121723 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 12:01:49.122569 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 12:01:49.125494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 12:01:49.127032 kernel: ACPI: bus type drm_connector registered
Mar 19 12:01:49.128848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 19 12:01:49.130522 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 12:01:49.131366 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 12:01:49.132862 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 19 12:01:49.144603 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 19 12:01:49.152870 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 19 12:01:49.162311 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 19 12:01:49.171018 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 19 12:01:49.171794 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 19 12:01:49.171844 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 12:01:49.174005 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 19 12:01:49.179074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 19 12:01:49.183124 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 19 12:01:49.185155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 12:01:49.194084 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 19 12:01:49.200129 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 19 12:01:49.202080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 12:01:49.209073 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 19 12:01:49.209869 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 12:01:49.212514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 12:01:49.227155 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 19 12:01:49.233120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 12:01:49.238528 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 19 12:01:49.242174 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 19 12:01:49.243402 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 19 12:01:49.275137 systemd-journald[1158]: Time spent on flushing to /var/log/journal/5ee36eaae34f4900b720c422caf5e68e is 37.077ms for 1161 entries.
Mar 19 12:01:49.275137 systemd-journald[1158]: System Journal (/var/log/journal/5ee36eaae34f4900b720c422caf5e68e) is 8M, max 584.8M, 576.8M free.
Mar 19 12:01:49.392367 systemd-journald[1158]: Received client request to flush runtime journal.
Mar 19 12:01:49.392450 kernel: loop0: detected capacity change from 0 to 147912
Mar 19 12:01:49.392511 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 19 12:01:49.285029 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 19 12:01:49.286085 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 19 12:01:49.299143 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 19 12:01:49.394368 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 19 12:01:49.417130 kernel: loop1: detected capacity change from 0 to 8
Mar 19 12:01:49.412996 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 19 12:01:49.432628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 12:01:49.470937 kernel: loop2: detected capacity change from 0 to 138176
Mar 19 12:01:49.477322 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Mar 19 12:01:49.477357 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Mar 19 12:01:49.513933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 12:01:49.526018 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 19 12:01:49.569222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 12:01:49.579299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 19 12:01:49.619448 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 19 12:01:49.663064 kernel: loop3: detected capacity change from 0 to 210664
Mar 19 12:01:49.692186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 19 12:01:49.723133 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 19 12:01:49.731165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 12:01:49.741655 kernel: loop4: detected capacity change from 0 to 147912
Mar 19 12:01:49.780940 kernel: loop5: detected capacity change from 0 to 8
Mar 19 12:01:49.797738 kernel: loop6: detected capacity change from 0 to 138176
Mar 19 12:01:49.832950 kernel: loop7: detected capacity change from 0 to 210664
Mar 19 12:01:49.849097 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Mar 19 12:01:49.849127 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Mar 19 12:01:49.871863 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 19 12:01:49.877071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 12:01:49.877869 (sd-merge)[1222]: Merged extensions into '/usr'.
Mar 19 12:01:49.895778 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 19 12:01:49.895827 systemd[1]: Reloading...
Mar 19 12:01:50.079466 zram_generator::config[1250]: No configuration found.
Mar 19 12:01:50.423277 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 19 12:01:50.553732 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 12:01:50.652624 systemd[1]: Reloading finished in 755 ms.
Mar 19 12:01:50.666839 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 19 12:01:50.688368 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 19 12:01:50.707212 systemd[1]: Starting ensure-sysext.service...
Mar 19 12:01:50.715120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 12:01:50.735974 systemd[1]: Reload requested from client PID 1308 ('systemctl') (unit ensure-sysext.service)...
Mar 19 12:01:50.736009 systemd[1]: Reloading...
Mar 19 12:01:50.806519 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 19 12:01:50.807100 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 19 12:01:50.809754 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 19 12:01:50.810229 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Mar 19 12:01:50.810352 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Mar 19 12:01:50.818236 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 12:01:50.818255 systemd-tmpfiles[1309]: Skipping /boot
Mar 19 12:01:50.845100 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 12:01:50.845121 systemd-tmpfiles[1309]: Skipping /boot
Mar 19 12:01:50.879938 zram_generator::config[1335]: No configuration found.
Mar 19 12:01:51.082816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 12:01:51.181720 systemd[1]: Reloading finished in 445 ms.
Mar 19 12:01:51.202030 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 19 12:01:51.218629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 12:01:51.233295 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 12:01:51.237213 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 19 12:01:51.241878 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 19 12:01:51.247260 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 12:01:51.253190 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 12:01:51.261238 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 19 12:01:51.267584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 12:01:51.268232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 12:01:51.276400 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 12:01:51.293241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 12:01:51.300393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 12:01:51.302420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 12:01:51.302601 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 12:01:51.302779 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 12:01:51.324401 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 19 12:01:51.326391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 19 12:01:51.334071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 12:01:51.334401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 12:01:51.339774 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 12:01:51.340208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 12:01:51.349236 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 12:01:51.350180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 12:01:51.350377 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 12:01:51.350580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 12:01:51.363557 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 19 12:01:51.375760 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 19 12:01:51.376550 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 19 12:01:51.379823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 12:01:51.381210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 12:01:51.383730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 12:01:51.384724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 12:01:51.391635 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 19 12:01:51.397287 systemd-udevd[1403]: Using default interface naming scheme 'v255'.
Mar 19 12:01:51.402015 systemd[1]: Finished ensure-sysext.service.
Mar 19 12:01:51.405466 augenrules[1432]: No rules
Mar 19 12:01:51.405689 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 12:01:51.406007 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 12:01:51.412634 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 12:01:51.413074 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 12:01:51.415738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 12:01:51.427289 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 19 12:01:51.455545 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 19 12:01:51.458233 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 19 12:01:51.467627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 12:01:51.477110 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 12:01:51.480198 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 19 12:01:51.603207 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 19 12:01:51.622509 systemd[1]: Reached target time-set.target - System Time Set.
Mar 19 12:01:51.694716 systemd-resolved[1399]: Positive Trust Anchors:
Mar 19 12:01:51.694739 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 12:01:51.694784 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 12:01:51.708657 systemd-resolved[1399]: Using system hostname 'srv-z8dvi.gb1.brightbox.com'.
Mar 19 12:01:51.709981 systemd-networkd[1447]: lo: Link UP
Mar 19 12:01:51.709988 systemd-networkd[1447]: lo: Gained carrier
Mar 19 12:01:51.712208 systemd-networkd[1447]: Enumeration completed
Mar 19 12:01:51.712346 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 12:01:51.721216 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 19 12:01:51.733938 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 19 12:01:51.734964 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 12:01:51.739472 systemd[1]: Reached target network.target - Network.
Mar 19 12:01:51.740174 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 12:01:51.761489 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 19 12:01:51.781742 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 19 12:01:51.804951 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1452)
Mar 19 12:01:51.954784 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 12:01:51.954799 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 12:01:51.958554 systemd-networkd[1447]: eth0: Link UP
Mar 19 12:01:51.958569 systemd-networkd[1447]: eth0: Gained carrier
Mar 19 12:01:51.958589 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 12:01:51.972154 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 19 12:01:51.988924 kernel: ACPI: button: Power Button [PWRF]
Mar 19 12:01:52.000929 kernel: mousedev: PS/2 mouse device common for all mice
Mar 19 12:01:52.003018 systemd-networkd[1447]: eth0: DHCPv4 address 10.230.57.154/30, gateway 10.230.57.153 acquired from 10.230.57.153
Mar 19 12:01:52.005371 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Mar 19 12:01:52.038856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 19 12:01:52.046161 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 19 12:01:52.063913 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 19 12:01:52.068721 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 19 12:01:52.069577 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 19 12:01:52.088539 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 19 12:01:52.089545 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 19 12:01:52.162209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 12:01:52.342917 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 19 12:01:52.374140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 12:01:52.383161 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 19 12:01:52.402692 lvm[1490]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 12:01:52.443849 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 19 12:01:52.445577 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 12:01:52.446425 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 12:01:52.447330 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 19 12:01:52.448334 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 19 12:01:52.449555 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 19 12:01:52.450444 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 19 12:01:52.451271 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 19 12:01:52.452049 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 19 12:01:52.452101 systemd[1]: Reached target paths.target - Path Units.
Mar 19 12:01:52.452728 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 12:01:52.455263 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 19 12:01:52.457854 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 19 12:01:52.463243 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 19 12:01:52.464236 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 19 12:01:52.465024 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 19 12:01:52.473693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 19 12:01:52.475039 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 19 12:01:52.483166 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 19 12:01:52.484980 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 19 12:01:52.485820 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 12:01:52.486495 systemd[1]: Reached target basic.target - Basic System.
Mar 19 12:01:52.487230 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 19 12:01:52.487294 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 19 12:01:52.490914 lvm[1494]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 12:01:52.491029 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 19 12:01:52.501089 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 19 12:01:52.509137 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 19 12:01:52.518379 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 19 12:01:52.522830 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 19 12:01:52.523778 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 19 12:01:52.533626 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 19 12:01:52.540140 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 19 12:01:52.545264 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 19 12:01:52.550655 jq[1498]: false
Mar 19 12:01:52.557160 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 19 12:01:52.569113 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 19 12:01:52.572150 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 19 12:01:52.572968 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 19 12:01:52.581841 extend-filesystems[1501]: Found loop4
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found loop5
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found loop6
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found loop7
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda1
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda2
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda3
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found usr
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda4
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda6
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda7
Mar 19 12:01:52.586063 extend-filesystems[1501]: Found vda9
Mar 19 12:01:52.586063 extend-filesystems[1501]: Checking size of /dev/vda9
Mar 19 12:01:52.584095 systemd[1]: Starting update-engine.service - Update Engine...
Mar 19 12:01:52.592061 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 19 12:01:52.598256 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 19 12:01:52.612452 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 19 12:01:52.613933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 19 12:01:52.614456 systemd[1]: motdgen.service: Deactivated successfully.
Mar 19 12:01:52.615909 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 19 12:01:52.623155 extend-filesystems[1501]: Resized partition /dev/vda9
Mar 19 12:01:52.626294 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 19 12:01:52.626670 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 19 12:01:52.629164 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024)
Mar 19 12:01:52.634999 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 19 12:01:52.656453 jq[1516]: true
Mar 19 12:01:52.679530 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 19 12:01:52.686529 dbus-daemon[1497]: [system] SELinux support is enabled
Mar 19 12:01:52.688523 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 19 12:01:52.701947 tar[1521]: linux-amd64/helm
Mar 19 12:01:52.698822 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 19 12:01:52.699932 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 19 12:01:52.699965 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 19 12:01:52.702042 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 19 12:01:52.702071 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 19 12:01:52.710670 dbus-daemon[1497]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1447 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 19 12:01:52.719302 jq[1534]: true
Mar 19 12:01:52.723152 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 19 12:01:52.763605 update_engine[1511]: I20250319 12:01:52.762852 1511 main.cc:92] Flatcar Update Engine starting
Mar 19 12:01:52.772322 systemd[1]: Started update-engine.service - Update Engine.
Mar 19 12:01:52.774846 update_engine[1511]: I20250319 12:01:52.774459 1511 update_check_scheduler.cc:74] Next update check in 4m20s
Mar 19 12:01:52.785703 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 19 12:01:53.014165 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 19 12:01:53.014212 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 19 12:01:53.014609 systemd-logind[1509]: New seat seat0.
Mar 19 12:01:53.016017 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 19 12:01:53.042926 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 19 12:01:53.126048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1454)
Mar 19 12:01:53.135130 bash[1556]: Updated "/home/core/.ssh/authorized_keys"
Mar 19 12:01:53.135322 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 19 12:01:53.135322 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 19 12:01:53.135322 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 19 12:01:53.136778 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 19 12:01:53.173024 extend-filesystems[1501]: Resized filesystem in /dev/vda9
Mar 19 12:01:53.140221 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 19 12:01:53.141497 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 19 12:01:53.214692 systemd[1]: Starting sshkeys.service...
Mar 19 12:01:53.222694 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 19 12:01:53.243075 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 19 12:01:53.260531 dbus-daemon[1497]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1538 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 19 12:01:53.269047 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 19 12:01:53.279289 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 19 12:01:53.282484 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 19 12:01:53.380298 polkitd[1564]: Started polkitd version 121
Mar 19 12:01:53.393583 sshd_keygen[1537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 19 12:01:53.492778 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 19 12:01:53.502314 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 19 12:01:53.512675 polkitd[1564]: Loading rules from directory /etc/polkit-1/rules.d
Mar 19 12:01:53.515125 polkitd[1564]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 19 12:01:53.516176 systemd[1]: Started sshd@0-10.230.57.154:22-139.178.89.65:42478.service - OpenSSH per-connection server daemon (139.178.89.65:42478).
Mar 19 12:01:53.518916 polkitd[1564]: Finished loading, compiling and executing 2 rules
Mar 19 12:01:53.521196 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 19 12:01:53.521898 polkitd[1564]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 19 12:01:53.522037 systemd[1]: Started polkit.service - Authorization Manager.
Mar 19 12:01:53.565174 systemd[1]: issuegen.service: Deactivated successfully.
Mar 19 12:01:53.567012 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 19 12:01:53.575449 systemd-hostnamed[1538]: Hostname set to (static)
Mar 19 12:01:53.579801 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 19 12:01:53.598948 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 19 12:01:53.640968 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 19 12:01:53.658117 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 19 12:01:53.670182 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 19 12:01:53.671338 systemd[1]: Reached target getty.target - Login Prompts.
Mar 19 12:01:53.834644 containerd[1526]: time="2025-03-19T12:01:53.834378043Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 19 12:01:53.882241 containerd[1526]: time="2025-03-19T12:01:53.881871543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886019 containerd[1526]: time="2025-03-19T12:01:53.885644929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886019 containerd[1526]: time="2025-03-19T12:01:53.885703891Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 19 12:01:53.886019 containerd[1526]: time="2025-03-19T12:01:53.885739653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 19 12:01:53.886167 containerd[1526]: time="2025-03-19T12:01:53.886105634Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 19 12:01:53.886167 containerd[1526]: time="2025-03-19T12:01:53.886136098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886718 containerd[1526]: time="2025-03-19T12:01:53.886251221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886718 containerd[1526]: time="2025-03-19T12:01:53.886286994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886718 containerd[1526]: time="2025-03-19T12:01:53.886642247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886718 containerd[1526]: time="2025-03-19T12:01:53.886668404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886718 containerd[1526]: time="2025-03-19T12:01:53.886688799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 12:01:53.886718 containerd[1526]: time="2025-03-19T12:01:53.886705171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 19 12:01:53.887903 containerd[1526]: time="2025-03-19T12:01:53.887205325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 19 12:01:53.887903 containerd[1526]: time="2025-03-19T12:01:53.887653498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 19 12:01:53.887903 containerd[1526]: time="2025-03-19T12:01:53.887827404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 12:01:53.887903 containerd[1526]: time="2025-03-19T12:01:53.887859774Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 19 12:01:53.888077 containerd[1526]: time="2025-03-19T12:01:53.888060039Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 19 12:01:53.888187 containerd[1526]: time="2025-03-19T12:01:53.888151701Z" level=info msg="metadata content store policy set" policy=shared
Mar 19 12:01:53.893521 containerd[1526]: time="2025-03-19T12:01:53.893476386Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 19 12:01:53.893598 containerd[1526]: time="2025-03-19T12:01:53.893580224Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 19 12:01:53.893706 containerd[1526]: time="2025-03-19T12:01:53.893664869Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 19 12:01:53.893756 containerd[1526]: time="2025-03-19T12:01:53.893703348Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 19 12:01:53.893756 containerd[1526]: time="2025-03-19T12:01:53.893729779Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 19 12:01:53.894033 containerd[1526]: time="2025-03-19T12:01:53.893953925Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 19 12:01:53.894526 containerd[1526]: time="2025-03-19T12:01:53.894379159Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894626739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894661859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894686535Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894708524Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894730245Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894750581Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894772337Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894794332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894822412Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894844184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894862683Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894921484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.894958 containerd[1526]: time="2025-03-19T12:01:53.894947454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.894982142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895006707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895032889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895056782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895076224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895097082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895119297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895142309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895162855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895183360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895202960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895224380Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895260240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895286445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.895601 containerd[1526]: time="2025-03-19T12:01:53.895305335Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895404533Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895551151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895587413Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895611403Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895628523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895676292Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895699067Z" level=info msg="NRI interface is disabled by configuration."
Mar 19 12:01:53.896157 containerd[1526]: time="2025-03-19T12:01:53.895716300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 19 12:01:53.896421 containerd[1526]: time="2025-03-19T12:01:53.896145822Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 19 12:01:53.896421 containerd[1526]: time="2025-03-19T12:01:53.896225081Z" level=info msg="Connect containerd service"
Mar 19 12:01:53.896421 containerd[1526]: time="2025-03-19T12:01:53.896291163Z" level=info msg="using legacy CRI server"
Mar 19 12:01:53.896421 containerd[1526]: time="2025-03-19T12:01:53.896312050Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.896491585Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.897613861Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.897777646Z" level=info msg="Start subscribing containerd event"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.897858533Z" level=info msg="Start recovering state"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.897995815Z" level=info msg="Start event monitor"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.898033343Z" level=info msg="Start snapshots syncer"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.898054460Z" level=info msg="Start cni network conf syncer for default"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.898068623Z" level=info msg="Start streaming server"
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.898841644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 19 12:01:53.899034 containerd[1526]: time="2025-03-19T12:01:53.898942221Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 19 12:01:53.899170 systemd[1]: Started containerd.service - containerd container runtime.
Mar 19 12:01:53.900661 containerd[1526]: time="2025-03-19T12:01:53.900593083Z" level=info msg="containerd successfully booted in 0.071026s"
Mar 19 12:01:54.067433 systemd-networkd[1447]: eth0: Gained IPv6LL
Mar 19 12:01:54.071371 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Mar 19 12:01:54.076009 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 19 12:01:54.078354 systemd[1]: Reached target network-online.target - Network is Online.
Mar 19 12:01:54.091257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:01:54.098646 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 19 12:01:54.143375 tar[1521]: linux-amd64/LICENSE
Mar 19 12:01:54.143375 tar[1521]: linux-amd64/README.md
Mar 19 12:01:54.171384 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 19 12:01:54.179706 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 19 12:01:54.612657 sshd[1584]: Accepted publickey for core from 139.178.89.65 port 42478 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:01:54.614848 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:01:54.627737 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 19 12:01:54.637922 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 19 12:01:54.656453 systemd-logind[1509]: New session 1 of user core.
Mar 19 12:01:54.670172 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 19 12:01:54.685451 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 19 12:01:54.692011 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 19 12:01:54.698213 systemd-logind[1509]: New session c1 of user core.
Mar 19 12:01:54.990280 systemd[1623]: Queued start job for default target default.target.
Mar 19 12:01:55.021341 systemd[1623]: Created slice app.slice - User Application Slice.
Mar 19 12:01:55.021514 systemd[1623]: Reached target paths.target - Paths.
Mar 19 12:01:55.021849 systemd[1623]: Reached target timers.target - Timers.
Mar 19 12:01:55.027998 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 19 12:01:55.059091 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 19 12:01:55.060188 systemd[1623]: Reached target sockets.target - Sockets.
Mar 19 12:01:55.060291 systemd[1623]: Reached target basic.target - Basic System.
Mar 19 12:01:55.060382 systemd[1623]: Reached target default.target - Main User Target.
Mar 19 12:01:55.060451 systemd[1623]: Startup finished in 351ms.
Mar 19 12:01:55.060706 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 19 12:01:55.073270 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 19 12:01:55.576789 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Mar 19 12:01:55.578088 systemd-networkd[1447]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e66:24:19ff:fee6:399a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e66:24:19ff:fee6:399a/64 assigned by NDisc.
Mar 19 12:01:55.578095 systemd-networkd[1447]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 19 12:01:55.610110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:01:55.622746 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 12:01:55.715348 systemd[1]: Started sshd@1-10.230.57.154:22-139.178.89.65:42480.service - OpenSSH per-connection server daemon (139.178.89.65:42480).
Mar 19 12:01:56.478598 kubelet[1639]: E0319 12:01:56.478445 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 12:01:56.480927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 12:01:56.481216 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 12:01:56.481860 systemd[1]: kubelet.service: Consumed 1.807s CPU time, 242.9M memory peak.
Mar 19 12:01:56.584798 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Mar 19 12:01:56.609808 sshd[1642]: Accepted publickey for core from 139.178.89.65 port 42480 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:01:56.612520 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:01:56.620578 systemd-logind[1509]: New session 2 of user core.
Mar 19 12:01:56.629163 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 19 12:01:57.227451 sshd[1653]: Connection closed by 139.178.89.65 port 42480
Mar 19 12:01:57.228953 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Mar 19 12:01:57.233441 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit.
Mar 19 12:01:57.234265 systemd[1]: sshd@1-10.230.57.154:22-139.178.89.65:42480.service: Deactivated successfully.
Mar 19 12:01:57.237194 systemd[1]: session-2.scope: Deactivated successfully.
Mar 19 12:01:57.239671 systemd-logind[1509]: Removed session 2.
Mar 19 12:01:57.268534 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Mar 19 12:01:57.392343 systemd[1]: Started sshd@2-10.230.57.154:22-139.178.89.65:42496.service - OpenSSH per-connection server daemon (139.178.89.65:42496).
Mar 19 12:01:58.282385 sshd[1659]: Accepted publickey for core from 139.178.89.65 port 42496 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:01:58.284407 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:01:58.292469 systemd-logind[1509]: New session 3 of user core.
Mar 19 12:01:58.305249 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 19 12:01:58.746599 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 19 12:01:58.749058 login[1600]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 19 12:01:58.755202 systemd-logind[1509]: New session 5 of user core.
Mar 19 12:01:58.774348 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 19 12:01:58.780687 systemd-logind[1509]: New session 4 of user core.
Mar 19 12:01:58.786314 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 19 12:01:58.919914 sshd[1661]: Connection closed by 139.178.89.65 port 42496
Mar 19 12:01:58.919124 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Mar 19 12:01:58.924267 systemd[1]: sshd@2-10.230.57.154:22-139.178.89.65:42496.service: Deactivated successfully.
Mar 19 12:01:58.924781 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit.
Mar 19 12:01:58.926841 systemd[1]: session-3.scope: Deactivated successfully.
Mar 19 12:01:58.929346 systemd-logind[1509]: Removed session 3.
Mar 19 12:01:59.698692 coreos-metadata[1496]: Mar 19 12:01:59.698 WARN failed to locate config-drive, using the metadata service API instead
Mar 19 12:01:59.726788 coreos-metadata[1496]: Mar 19 12:01:59.726 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 19 12:01:59.733283 coreos-metadata[1496]: Mar 19 12:01:59.733 INFO Fetch failed with 404: resource not found
Mar 19 12:01:59.733283 coreos-metadata[1496]: Mar 19 12:01:59.733 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 19 12:01:59.733948 coreos-metadata[1496]: Mar 19 12:01:59.733 INFO Fetch successful
Mar 19 12:01:59.734146 coreos-metadata[1496]: Mar 19 12:01:59.734 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 19 12:01:59.748065 coreos-metadata[1496]: Mar 19 12:01:59.747 INFO Fetch successful
Mar 19 12:01:59.750726 coreos-metadata[1496]: Mar 19 12:01:59.750 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 19 12:01:59.770477 coreos-metadata[1496]: Mar 19 12:01:59.770 INFO Fetch successful
Mar 19 12:01:59.770631 coreos-metadata[1496]: Mar 19 12:01:59.770 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 19 12:01:59.786143 coreos-metadata[1496]: Mar 19 12:01:59.786 INFO Fetch successful
Mar 19 12:01:59.786492 coreos-metadata[1496]: Mar 19 12:01:59.786 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 19 12:01:59.810413 coreos-metadata[1496]: Mar 19 12:01:59.810 INFO Fetch successful
Mar 19 12:01:59.846306 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 19 12:01:59.847551 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 19 12:02:00.569406 coreos-metadata[1563]: Mar 19 12:02:00.569 WARN failed to locate config-drive, using the metadata service API instead
Mar 19 12:02:00.592498 coreos-metadata[1563]: Mar 19 12:02:00.592 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 19 12:02:00.619415 coreos-metadata[1563]: Mar 19 12:02:00.619 INFO Fetch successful
Mar 19 12:02:00.619665 coreos-metadata[1563]: Mar 19 12:02:00.619 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 19 12:02:00.648861 coreos-metadata[1563]: Mar 19 12:02:00.648 INFO Fetch successful
Mar 19 12:02:00.651558 unknown[1563]: wrote ssh authorized keys file for user: core
Mar 19 12:02:00.687453 update-ssh-keys[1702]: Updated "/home/core/.ssh/authorized_keys"
Mar 19 12:02:00.687321 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 19 12:02:00.690519 systemd[1]: Finished sshkeys.service.
Mar 19 12:02:00.694647 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 19 12:02:00.694929 systemd[1]: Startup finished in 1.597s (kernel) + 16.979s (initrd) + 13.000s (userspace) = 31.577s.
Mar 19 12:02:06.732134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 19 12:02:06.739179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:07.004079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:07.009697 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 12:02:07.087251 kubelet[1713]: E0319 12:02:07.087066 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 12:02:07.091975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 12:02:07.092285 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 12:02:07.093103 systemd[1]: kubelet.service: Consumed 335ms CPU time, 98.2M memory peak.
Mar 19 12:02:09.078271 systemd[1]: Started sshd@3-10.230.57.154:22-139.178.89.65:34432.service - OpenSSH per-connection server daemon (139.178.89.65:34432).
Mar 19 12:02:09.971448 sshd[1722]: Accepted publickey for core from 139.178.89.65 port 34432 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:02:09.973699 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:02:09.983195 systemd-logind[1509]: New session 6 of user core.
Mar 19 12:02:09.998335 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 19 12:02:10.589197 sshd[1724]: Connection closed by 139.178.89.65 port 34432
Mar 19 12:02:10.590264 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Mar 19 12:02:10.595490 systemd[1]: sshd@3-10.230.57.154:22-139.178.89.65:34432.service: Deactivated successfully.
Mar 19 12:02:10.597724 systemd[1]: session-6.scope: Deactivated successfully.
Mar 19 12:02:10.598761 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit.
Mar 19 12:02:10.600362 systemd-logind[1509]: Removed session 6.
Mar 19 12:02:10.749265 systemd[1]: Started sshd@4-10.230.57.154:22-139.178.89.65:47408.service - OpenSSH per-connection server daemon (139.178.89.65:47408).
Mar 19 12:02:11.640726 sshd[1730]: Accepted publickey for core from 139.178.89.65 port 47408 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:02:11.642755 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:02:11.653159 systemd-logind[1509]: New session 7 of user core.
Mar 19 12:02:11.665214 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 19 12:02:12.255387 sshd[1732]: Connection closed by 139.178.89.65 port 47408
Mar 19 12:02:12.256771 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Mar 19 12:02:12.262923 systemd[1]: sshd@4-10.230.57.154:22-139.178.89.65:47408.service: Deactivated successfully.
Mar 19 12:02:12.265437 systemd[1]: session-7.scope: Deactivated successfully.
Mar 19 12:02:12.266476 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit.
Mar 19 12:02:12.267789 systemd-logind[1509]: Removed session 7.
Mar 19 12:02:12.417300 systemd[1]: Started sshd@5-10.230.57.154:22-139.178.89.65:47418.service - OpenSSH per-connection server daemon (139.178.89.65:47418).
Mar 19 12:02:13.310510 sshd[1738]: Accepted publickey for core from 139.178.89.65 port 47418 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:02:13.312639 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:02:13.320268 systemd-logind[1509]: New session 8 of user core.
Mar 19 12:02:13.331177 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 19 12:02:13.936515 sshd[1740]: Connection closed by 139.178.89.65 port 47418
Mar 19 12:02:13.937497 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Mar 19 12:02:13.942247 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit.
Mar 19 12:02:13.942738 systemd[1]: sshd@5-10.230.57.154:22-139.178.89.65:47418.service: Deactivated successfully.
Mar 19 12:02:13.945563 systemd[1]: session-8.scope: Deactivated successfully.
Mar 19 12:02:13.948171 systemd-logind[1509]: Removed session 8.
Mar 19 12:02:14.094273 systemd[1]: Started sshd@6-10.230.57.154:22-139.178.89.65:47424.service - OpenSSH per-connection server daemon (139.178.89.65:47424).
Mar 19 12:02:14.985643 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 47424 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:02:14.987730 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:02:14.995313 systemd-logind[1509]: New session 9 of user core.
Mar 19 12:02:15.006156 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 19 12:02:15.473675 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 19 12:02:15.474197 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 12:02:15.489560 sudo[1749]: pam_unix(sudo:session): session closed for user root
Mar 19 12:02:15.633090 sshd[1748]: Connection closed by 139.178.89.65 port 47424
Mar 19 12:02:15.634228 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Mar 19 12:02:15.639942 systemd[1]: sshd@6-10.230.57.154:22-139.178.89.65:47424.service: Deactivated successfully.
Mar 19 12:02:15.642557 systemd[1]: session-9.scope: Deactivated successfully.
Mar 19 12:02:15.643952 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Mar 19 12:02:15.645617 systemd-logind[1509]: Removed session 9.
Mar 19 12:02:15.794331 systemd[1]: Started sshd@7-10.230.57.154:22-139.178.89.65:47428.service - OpenSSH per-connection server daemon (139.178.89.65:47428).
Mar 19 12:02:16.682982 sshd[1755]: Accepted publickey for core from 139.178.89.65 port 47428 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:02:16.684995 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:02:16.693137 systemd-logind[1509]: New session 10 of user core.
Mar 19 12:02:16.700195 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 19 12:02:17.159468 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 19 12:02:17.160560 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 12:02:17.162026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 19 12:02:17.171045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:17.175489 sudo[1759]: pam_unix(sudo:session): session closed for user root
Mar 19 12:02:17.184419 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 19 12:02:17.184912 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 12:02:17.208448 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 12:02:17.262134 augenrules[1784]: No rules
Mar 19 12:02:17.263581 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 12:02:17.263965 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 12:02:17.266228 sudo[1758]: pam_unix(sudo:session): session closed for user root
Mar 19 12:02:17.398128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:17.410427 sshd[1757]: Connection closed by 139.178.89.65 port 47428
Mar 19 12:02:17.410854 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Mar 19 12:02:17.414420 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 12:02:17.420400 systemd[1]: sshd@7-10.230.57.154:22-139.178.89.65:47428.service: Deactivated successfully.
Mar 19 12:02:17.424357 systemd[1]: session-10.scope: Deactivated successfully.
Mar 19 12:02:17.426329 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit.
Mar 19 12:02:17.428429 systemd-logind[1509]: Removed session 10.
Mar 19 12:02:17.509068 kubelet[1794]: E0319 12:02:17.508960 1794 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 12:02:17.511463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 12:02:17.511721 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 12:02:17.512814 systemd[1]: kubelet.service: Consumed 286ms CPU time, 96.3M memory peak.
Mar 19 12:02:17.569301 systemd[1]: Started sshd@8-10.230.57.154:22-139.178.89.65:47432.service - OpenSSH per-connection server daemon (139.178.89.65:47432).
Mar 19 12:02:18.458218 sshd[1806]: Accepted publickey for core from 139.178.89.65 port 47432 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:02:18.460157 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:02:18.469000 systemd-logind[1509]: New session 11 of user core.
Mar 19 12:02:18.472128 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 19 12:02:18.933167 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 19 12:02:18.934181 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 12:02:19.744401 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 19 12:02:19.745457 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 19 12:02:20.421084 dockerd[1827]: time="2025-03-19T12:02:20.420945317Z" level=info msg="Starting up"
Mar 19 12:02:20.717321 systemd[1]: var-lib-docker-metacopy\x2dcheck256656464-merged.mount: Deactivated successfully.
Mar 19 12:02:20.743913 dockerd[1827]: time="2025-03-19T12:02:20.743798913Z" level=info msg="Loading containers: start."
Mar 19 12:02:20.968984 kernel: Initializing XFRM netlink socket
Mar 19 12:02:21.019037 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Mar 19 12:02:21.105934 systemd-networkd[1447]: docker0: Link UP
Mar 19 12:02:21.137283 dockerd[1827]: time="2025-03-19T12:02:21.137199957Z" level=info msg="Loading containers: done."
Mar 19 12:02:21.160921 dockerd[1827]: time="2025-03-19T12:02:21.160508056Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 19 12:02:21.160921 dockerd[1827]: time="2025-03-19T12:02:21.160664227Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 19 12:02:21.160921 dockerd[1827]: time="2025-03-19T12:02:21.160905787Z" level=info msg="Daemon has completed initialization"
Mar 19 12:02:21.163137 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2853008916-merged.mount: Deactivated successfully.
Mar 19 12:02:21.208694 dockerd[1827]: time="2025-03-19T12:02:21.208572936Z" level=info msg="API listen on /run/docker.sock"
Mar 19 12:02:21.209263 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 19 12:02:21.809585 systemd-timesyncd[1440]: Contacted time server [2a00:da00:1800:837c::1]:123 (2.flatcar.pool.ntp.org).
Mar 19 12:02:21.809694 systemd-timesyncd[1440]: Initial clock synchronization to Wed 2025-03-19 12:02:21.809222 UTC.
Mar 19 12:02:21.809783 systemd-resolved[1399]: Clock change detected. Flushing caches.
Mar 19 12:02:23.326306 containerd[1526]: time="2025-03-19T12:02:23.325224304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 19 12:02:24.311402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127886118.mount: Deactivated successfully.
Mar 19 12:02:26.175163 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 19 12:02:27.709398 containerd[1526]: time="2025-03-19T12:02:27.709309283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:27.711300 containerd[1526]: time="2025-03-19T12:02:27.711237427Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674581"
Mar 19 12:02:27.712222 containerd[1526]: time="2025-03-19T12:02:27.711804569Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:27.716966 containerd[1526]: time="2025-03-19T12:02:27.716886891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:27.718888 containerd[1526]: time="2025-03-19T12:02:27.718542564Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 4.393141235s"
Mar 19 12:02:27.718888 containerd[1526]: time="2025-03-19T12:02:27.718613014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 19 12:02:27.761521 containerd[1526]: time="2025-03-19T12:02:27.761457224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 19 12:02:28.110432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 19 12:02:28.116452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:28.339411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:28.347923 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 12:02:28.444226 kubelet[2094]: E0319 12:02:28.444049 2094 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 12:02:28.446999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 12:02:28.447309 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 12:02:28.447811 systemd[1]: kubelet.service: Consumed 246ms CPU time, 97.7M memory peak.
Mar 19 12:02:31.359026 containerd[1526]: time="2025-03-19T12:02:31.358930975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:31.363217 containerd[1526]: time="2025-03-19T12:02:31.363124099Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619780"
Mar 19 12:02:31.367047 containerd[1526]: time="2025-03-19T12:02:31.366969030Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:31.373051 containerd[1526]: time="2025-03-19T12:02:31.372962150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:31.374898 containerd[1526]: time="2025-03-19T12:02:31.374734677Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 3.613215347s"
Mar 19 12:02:31.374898 containerd[1526]: time="2025-03-19T12:02:31.374780896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 19 12:02:31.410613 containerd[1526]: time="2025-03-19T12:02:31.410558412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 19 12:02:34.152233 containerd[1526]: time="2025-03-19T12:02:34.151944684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:34.153324 containerd[1526]: time="2025-03-19T12:02:34.153256609Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903317"
Mar 19 12:02:34.154809 containerd[1526]: time="2025-03-19T12:02:34.154736328Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:34.158871 containerd[1526]: time="2025-03-19T12:02:34.158797809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:34.160862 containerd[1526]: time="2025-03-19T12:02:34.160696845Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 2.750058239s"
Mar 19 12:02:34.160862 containerd[1526]: time="2025-03-19T12:02:34.160740607Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 19 12:02:34.212119 containerd[1526]: time="2025-03-19T12:02:34.212057070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 19 12:02:35.881172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660262590.mount: Deactivated successfully.
Mar 19 12:02:36.734016 containerd[1526]: time="2025-03-19T12:02:36.733926039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:36.737798 containerd[1526]: time="2025-03-19T12:02:36.737715512Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185380"
Mar 19 12:02:36.739097 containerd[1526]: time="2025-03-19T12:02:36.739026282Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:36.742055 containerd[1526]: time="2025-03-19T12:02:36.741981359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:36.743258 containerd[1526]: time="2025-03-19T12:02:36.743214908Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.530827576s"
Mar 19 12:02:36.743349 containerd[1526]: time="2025-03-19T12:02:36.743285188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 19 12:02:36.781879 containerd[1526]: time="2025-03-19T12:02:36.781769191Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 19 12:02:37.435108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692973308.mount: Deactivated successfully.
Mar 19 12:02:38.610565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 19 12:02:38.625456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:38.705281 update_engine[1511]: I20250319 12:02:38.705057 1511 update_attempter.cc:509] Updating boot flags...
Mar 19 12:02:39.014610 containerd[1526]: time="2025-03-19T12:02:39.014013238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:39.031561 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2185)
Mar 19 12:02:39.031799 containerd[1526]: time="2025-03-19T12:02:39.031352878Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Mar 19 12:02:39.033662 containerd[1526]: time="2025-03-19T12:02:39.033568544Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:39.054227 containerd[1526]: time="2025-03-19T12:02:39.051506576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:39.065212 containerd[1526]: time="2025-03-19T12:02:39.064832459Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.282981975s"
Mar 19 12:02:39.065212 containerd[1526]: time="2025-03-19T12:02:39.064924474Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 19 12:02:39.166215 containerd[1526]: time="2025-03-19T12:02:39.165710392Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 19 12:02:39.198478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:39.199562 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 12:02:39.300261 kubelet[2200]: E0319 12:02:39.298538 2200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 12:02:39.301017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 12:02:39.301340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 12:02:39.302260 systemd[1]: kubelet.service: Consumed 253ms CPU time, 97.3M memory peak.
Mar 19 12:02:39.753517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334012648.mount: Deactivated successfully.
Mar 19 12:02:39.758313 containerd[1526]: time="2025-03-19T12:02:39.758152792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:39.759417 containerd[1526]: time="2025-03-19T12:02:39.759352425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Mar 19 12:02:39.760279 containerd[1526]: time="2025-03-19T12:02:39.760205182Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:39.763555 containerd[1526]: time="2025-03-19T12:02:39.763474765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:39.764984 containerd[1526]: time="2025-03-19T12:02:39.764834031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 599.062669ms"
Mar 19 12:02:39.764984 containerd[1526]: time="2025-03-19T12:02:39.764875026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 19 12:02:39.797401 containerd[1526]: time="2025-03-19T12:02:39.797341218Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 19 12:02:40.474001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921482500.mount: Deactivated successfully.
Mar 19 12:02:45.366863 containerd[1526]: time="2025-03-19T12:02:45.366768371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:45.368647 containerd[1526]: time="2025-03-19T12:02:45.368600944Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Mar 19 12:02:45.369602 containerd[1526]: time="2025-03-19T12:02:45.369527770Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:45.377606 containerd[1526]: time="2025-03-19T12:02:45.377531309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 12:02:45.379666 containerd[1526]: time="2025-03-19T12:02:45.379466137Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.582068742s"
Mar 19 12:02:45.379666 containerd[1526]: time="2025-03-19T12:02:45.379516028Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 19 12:02:49.360573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 19 12:02:49.375518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:49.677078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:49.688744 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 12:02:49.702092 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:49.705515 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 12:02:49.705894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:49.706266 systemd[1]: kubelet.service: Consumed 242ms CPU time, 91.2M memory peak.
Mar 19 12:02:49.714680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:49.745167 systemd[1]: Reload requested from client PID 2339 ('systemctl') (unit session-11.scope)...
Mar 19 12:02:49.745454 systemd[1]: Reloading...
Mar 19 12:02:49.956245 zram_generator::config[2381]: No configuration found.
Mar 19 12:02:50.129948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 12:02:50.286143 systemd[1]: Reloading finished in 539 ms.
Mar 19 12:02:50.365456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:50.377125 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 12:02:50.380011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:50.380637 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 12:02:50.381033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:50.381147 systemd[1]: kubelet.service: Consumed 133ms CPU time, 83.3M memory peak.
Mar 19 12:02:50.390785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 12:02:50.546394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 12:02:50.557787 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 12:02:50.627360 kubelet[2455]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 12:02:50.627360 kubelet[2455]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 12:02:50.627360 kubelet[2455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 12:02:50.636787 kubelet[2455]: I0319 12:02:50.636573 2455 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 12:02:51.080718 kubelet[2455]: I0319 12:02:51.080557 2455 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 19 12:02:51.080718 kubelet[2455]: I0319 12:02:51.080604 2455 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 12:02:51.080999 kubelet[2455]: I0319 12:02:51.080914 2455 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 19 12:02:51.105206 kubelet[2455]: I0319 12:02:51.104937 2455 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 12:02:51.105206 kubelet[2455]: E0319 12:02:51.105103 2455 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.57.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.57.154:6443: connect: connection refused
Mar 19 12:02:51.126778 kubelet[2455]: I0319 12:02:51.126719 2455 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 12:02:51.130301 kubelet[2455]: I0319 12:02:51.130207 2455 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 12:02:51.131777 kubelet[2455]: I0319 12:02:51.130281 2455 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-z8dvi.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMe
mory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 19 12:02:51.132474 kubelet[2455]: I0319 12:02:51.132415 2455 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 12:02:51.132474 kubelet[2455]: I0319 12:02:51.132452 2455 container_manager_linux.go:301] "Creating device plugin manager" Mar 19 12:02:51.132721 kubelet[2455]: I0319 12:02:51.132685 2455 state_mem.go:36] "Initialized new in-memory state store" Mar 19 12:02:51.134786 kubelet[2455]: I0319 12:02:51.134332 2455 kubelet.go:400] "Attempting to sync node with API server" Mar 19 12:02:51.134786 kubelet[2455]: I0319 12:02:51.134369 2455 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 12:02:51.134786 kubelet[2455]: I0319 12:02:51.134437 2455 kubelet.go:312] "Adding apiserver pod source" Mar 19 12:02:51.134786 kubelet[2455]: I0319 12:02:51.134513 2455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 12:02:51.134786 kubelet[2455]: W0319 12:02:51.134479 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.57.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-z8dvi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.134786 kubelet[2455]: E0319 12:02:51.134603 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.57.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-z8dvi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.138062 kubelet[2455]: W0319 12:02:51.137521 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.57.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.138062 kubelet[2455]: E0319 12:02:51.137587 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.57.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.138713 kubelet[2455]: I0319 12:02:51.138370 2455 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 12:02:51.140971 kubelet[2455]: I0319 12:02:51.140227 2455 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 12:02:51.140971 kubelet[2455]: W0319 12:02:51.140355 2455 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 19 12:02:51.141582 kubelet[2455]: I0319 12:02:51.141519 2455 server.go:1264] "Started kubelet" Mar 19 12:02:51.147217 kubelet[2455]: I0319 12:02:51.146956 2455 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 12:02:51.149165 kubelet[2455]: I0319 12:02:51.148939 2455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 12:02:51.150287 kubelet[2455]: I0319 12:02:51.149957 2455 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 12:02:51.150490 kubelet[2455]: I0319 12:02:51.150377 2455 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 12:02:51.153224 kubelet[2455]: I0319 12:02:51.152630 2455 server.go:455] "Adding debug handlers to kubelet server" Mar 19 12:02:51.158625 kubelet[2455]: I0319 12:02:51.158096 2455 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 19 12:02:51.161569 kubelet[2455]: I0319 12:02:51.161539 2455 desired_state_of_world_populator.go:149] "Desired state populator starts to run" 
Mar 19 12:02:51.161696 kubelet[2455]: I0319 12:02:51.161674 2455 reconciler.go:26] "Reconciler: start to sync state" Mar 19 12:02:51.166203 kubelet[2455]: W0319 12:02:51.165269 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.57.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.166203 kubelet[2455]: E0319 12:02:51.165332 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.57.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.166203 kubelet[2455]: E0319 12:02:51.165406 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.57.154:6443/api/v1/namespaces/default/events\": dial tcp 10.230.57.154:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-z8dvi.gb1.brightbox.com.182e32a096953b55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-z8dvi.gb1.brightbox.com,UID:srv-z8dvi.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-z8dvi.gb1.brightbox.com,},FirstTimestamp:2025-03-19 12:02:51.141479253 +0000 UTC m=+0.578912736,LastTimestamp:2025-03-19 12:02:51.141479253 +0000 UTC m=+0.578912736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-z8dvi.gb1.brightbox.com,}" Mar 19 12:02:51.166203 kubelet[2455]: E0319 12:02:51.165641 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.230.57.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-z8dvi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.154:6443: connect: connection refused" interval="200ms" Mar 19 12:02:51.166772 kubelet[2455]: I0319 12:02:51.166741 2455 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 12:02:51.168953 kubelet[2455]: E0319 12:02:51.168926 2455 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 12:02:51.169846 kubelet[2455]: I0319 12:02:51.169820 2455 factory.go:221] Registration of the containerd container factory successfully Mar 19 12:02:51.169975 kubelet[2455]: I0319 12:02:51.169956 2455 factory.go:221] Registration of the systemd container factory successfully Mar 19 12:02:51.183811 kubelet[2455]: I0319 12:02:51.183734 2455 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 12:02:51.186857 kubelet[2455]: I0319 12:02:51.186310 2455 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 12:02:51.186857 kubelet[2455]: I0319 12:02:51.186378 2455 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 12:02:51.186857 kubelet[2455]: I0319 12:02:51.186419 2455 kubelet.go:2337] "Starting kubelet main sync loop" Mar 19 12:02:51.186857 kubelet[2455]: E0319 12:02:51.186502 2455 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 12:02:51.198702 kubelet[2455]: W0319 12:02:51.198636 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.57.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.198832 kubelet[2455]: E0319 12:02:51.198708 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.57.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:51.221669 kubelet[2455]: I0319 12:02:51.221622 2455 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 12:02:51.221669 kubelet[2455]: I0319 12:02:51.221656 2455 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 12:02:51.221869 kubelet[2455]: I0319 12:02:51.221695 2455 state_mem.go:36] "Initialized new in-memory state store" Mar 19 12:02:51.223606 kubelet[2455]: I0319 12:02:51.223568 2455 policy_none.go:49] "None policy: Start" Mar 19 12:02:51.224870 kubelet[2455]: I0319 12:02:51.224798 2455 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 12:02:51.224969 kubelet[2455]: I0319 12:02:51.224830 2455 state_mem.go:35] "Initializing new in-memory state store" Mar 19 12:02:51.235220 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 19 12:02:51.249974 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 12:02:51.262356 kubelet[2455]: I0319 12:02:51.262316 2455 kubelet_node_status.go:73] "Attempting to register node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.264137 kubelet[2455]: E0319 12:02:51.263556 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.57.154:6443/api/v1/nodes\": dial tcp 10.230.57.154:6443: connect: connection refused" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.263631 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 12:02:51.265553 kubelet[2455]: I0319 12:02:51.265527 2455 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 12:02:51.266501 kubelet[2455]: I0319 12:02:51.266446 2455 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 12:02:51.267582 kubelet[2455]: I0319 12:02:51.267437 2455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 12:02:51.269976 kubelet[2455]: E0319 12:02:51.269774 2455 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:51.286797 kubelet[2455]: I0319 12:02:51.286724 2455 topology_manager.go:215] "Topology Admit Handler" podUID="1ff4b2597610b83c66cb835a9c6ccbd6" podNamespace="kube-system" podName="kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.290173 kubelet[2455]: I0319 12:02:51.289167 2455 topology_manager.go:215] "Topology Admit Handler" podUID="a2ce60e151f60c21def6fdafb12c160d" podNamespace="kube-system" podName="kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.292642 kubelet[2455]: I0319 12:02:51.292571 2455 topology_manager.go:215] "Topology Admit Handler" 
podUID="6964ce632ab0af3306b5a2630d3b811d" podNamespace="kube-system" podName="kube-scheduler-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.301962 systemd[1]: Created slice kubepods-burstable-pod1ff4b2597610b83c66cb835a9c6ccbd6.slice - libcontainer container kubepods-burstable-pod1ff4b2597610b83c66cb835a9c6ccbd6.slice. Mar 19 12:02:51.323279 systemd[1]: Created slice kubepods-burstable-poda2ce60e151f60c21def6fdafb12c160d.slice - libcontainer container kubepods-burstable-poda2ce60e151f60c21def6fdafb12c160d.slice. Mar 19 12:02:51.336448 systemd[1]: Created slice kubepods-burstable-pod6964ce632ab0af3306b5a2630d3b811d.slice - libcontainer container kubepods-burstable-pod6964ce632ab0af3306b5a2630d3b811d.slice. Mar 19 12:02:51.366354 kubelet[2455]: E0319 12:02:51.366283 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-z8dvi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.154:6443: connect: connection refused" interval="400ms" Mar 19 12:02:51.463227 kubelet[2455]: I0319 12:02:51.463131 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-kubeconfig\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463423 kubelet[2455]: I0319 12:02:51.463233 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6964ce632ab0af3306b5a2630d3b811d-kubeconfig\") pod \"kube-scheduler-srv-z8dvi.gb1.brightbox.com\" (UID: \"6964ce632ab0af3306b5a2630d3b811d\") " pod="kube-system/kube-scheduler-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463423 kubelet[2455]: I0319 12:02:51.463323 2455 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-ca-certs\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463423 kubelet[2455]: I0319 12:02:51.463353 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-flexvolume-dir\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463423 kubelet[2455]: I0319 12:02:51.463380 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-k8s-certs\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463423 kubelet[2455]: I0319 12:02:51.463411 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463703 kubelet[2455]: I0319 12:02:51.463447 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ff4b2597610b83c66cb835a9c6ccbd6-ca-certs\") pod 
\"kube-apiserver-srv-z8dvi.gb1.brightbox.com\" (UID: \"1ff4b2597610b83c66cb835a9c6ccbd6\") " pod="kube-system/kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463703 kubelet[2455]: I0319 12:02:51.463475 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ff4b2597610b83c66cb835a9c6ccbd6-k8s-certs\") pod \"kube-apiserver-srv-z8dvi.gb1.brightbox.com\" (UID: \"1ff4b2597610b83c66cb835a9c6ccbd6\") " pod="kube-system/kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.463703 kubelet[2455]: I0319 12:02:51.463503 2455 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ff4b2597610b83c66cb835a9c6ccbd6-usr-share-ca-certificates\") pod \"kube-apiserver-srv-z8dvi.gb1.brightbox.com\" (UID: \"1ff4b2597610b83c66cb835a9c6ccbd6\") " pod="kube-system/kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.466339 kubelet[2455]: I0319 12:02:51.466264 2455 kubelet_node_status.go:73] "Attempting to register node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.466736 kubelet[2455]: E0319 12:02:51.466691 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.57.154:6443/api/v1/nodes\": dial tcp 10.230.57.154:6443: connect: connection refused" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.621115 containerd[1526]: time="2025-03-19T12:02:51.620511660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-z8dvi.gb1.brightbox.com,Uid:1ff4b2597610b83c66cb835a9c6ccbd6,Namespace:kube-system,Attempt:0,}" Mar 19 12:02:51.634246 containerd[1526]: time="2025-03-19T12:02:51.634141035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-z8dvi.gb1.brightbox.com,Uid:a2ce60e151f60c21def6fdafb12c160d,Namespace:kube-system,Attempt:0,}" Mar 19 12:02:51.641857 
containerd[1526]: time="2025-03-19T12:02:51.641616334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-z8dvi.gb1.brightbox.com,Uid:6964ce632ab0af3306b5a2630d3b811d,Namespace:kube-system,Attempt:0,}" Mar 19 12:02:51.767118 kubelet[2455]: E0319 12:02:51.767010 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-z8dvi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.154:6443: connect: connection refused" interval="800ms" Mar 19 12:02:51.870447 kubelet[2455]: I0319 12:02:51.870406 2455 kubelet_node_status.go:73] "Attempting to register node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:51.871025 kubelet[2455]: E0319 12:02:51.870956 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.57.154:6443/api/v1/nodes\": dial tcp 10.230.57.154:6443: connect: connection refused" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:52.060024 kubelet[2455]: W0319 12:02:52.059832 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.57.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-z8dvi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.060024 kubelet[2455]: E0319 12:02:52.059929 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.57.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-z8dvi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.145574 kubelet[2455]: W0319 12:02:52.145440 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.57.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.145574 kubelet[2455]: E0319 12:02:52.145533 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.57.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.185131 kubelet[2455]: W0319 12:02:52.184978 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.57.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.185131 kubelet[2455]: E0319 12:02:52.185080 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.57.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.331639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311178153.mount: Deactivated successfully. 
Mar 19 12:02:52.364370 containerd[1526]: time="2025-03-19T12:02:52.364288270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 12:02:52.367664 containerd[1526]: time="2025-03-19T12:02:52.367617764Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 19 12:02:52.369853 containerd[1526]: time="2025-03-19T12:02:52.369742476Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 12:02:52.379295 containerd[1526]: time="2025-03-19T12:02:52.379242185Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 12:02:52.381018 containerd[1526]: time="2025-03-19T12:02:52.380970208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 12:02:52.388605 containerd[1526]: time="2025-03-19T12:02:52.388569340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 12:02:52.393896 containerd[1526]: time="2025-03-19T12:02:52.393841685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 12:02:52.394784 containerd[1526]: time="2025-03-19T12:02:52.394704940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 12:02:52.396789 
containerd[1526]: time="2025-03-19T12:02:52.396006018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 775.2939ms" Mar 19 12:02:52.401996 containerd[1526]: time="2025-03-19T12:02:52.401955646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 760.235339ms" Mar 19 12:02:52.407895 containerd[1526]: time="2025-03-19T12:02:52.407856265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 773.573797ms" Mar 19 12:02:52.505306 kubelet[2455]: W0319 12:02:52.505170 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.57.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.505306 kubelet[2455]: E0319 12:02:52.505270 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.57.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:52.567941 kubelet[2455]: E0319 12:02:52.567864 2455 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.230.57.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-z8dvi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.154:6443: connect: connection refused" interval="1.6s" Mar 19 12:02:52.634658 containerd[1526]: time="2025-03-19T12:02:52.634443148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 12:02:52.634658 containerd[1526]: time="2025-03-19T12:02:52.634592042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 12:02:52.637458 containerd[1526]: time="2025-03-19T12:02:52.634627897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:02:52.637864 containerd[1526]: time="2025-03-19T12:02:52.637379041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 12:02:52.637864 containerd[1526]: time="2025-03-19T12:02:52.637437488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 12:02:52.637864 containerd[1526]: time="2025-03-19T12:02:52.637455620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:02:52.637864 containerd[1526]: time="2025-03-19T12:02:52.637747321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:02:52.641153 containerd[1526]: time="2025-03-19T12:02:52.639776587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:02:52.650617 containerd[1526]: time="2025-03-19T12:02:52.650492820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 12:02:52.650806 containerd[1526]: time="2025-03-19T12:02:52.650684668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 12:02:52.650806 containerd[1526]: time="2025-03-19T12:02:52.650717740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:02:52.651108 containerd[1526]: time="2025-03-19T12:02:52.650964005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:02:52.679375 kubelet[2455]: I0319 12:02:52.679336 2455 kubelet_node_status.go:73] "Attempting to register node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:52.680798 kubelet[2455]: E0319 12:02:52.680760 2455 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.57.154:6443/api/v1/nodes\": dial tcp 10.230.57.154:6443: connect: connection refused" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:52.685454 systemd[1]: Started cri-containerd-52ad17bc08f54966a6727802788f3d60227cec637d83460c4c95ad61b905c1bf.scope - libcontainer container 52ad17bc08f54966a6727802788f3d60227cec637d83460c4c95ad61b905c1bf. Mar 19 12:02:52.691541 systemd[1]: Started cri-containerd-9f37764a126065349936ac59f89ced138c32a5c53b35cdff255136f45c7bd5fe.scope - libcontainer container 9f37764a126065349936ac59f89ced138c32a5c53b35cdff255136f45c7bd5fe. Mar 19 12:02:52.717561 systemd[1]: Started cri-containerd-2c13c9042a57a7f183339a35a70eeb7af796e7125be141676f04182b9c718c98.scope - libcontainer container 2c13c9042a57a7f183339a35a70eeb7af796e7125be141676f04182b9c718c98. 
Mar 19 12:02:52.879590 containerd[1526]: time="2025-03-19T12:02:52.879484026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-z8dvi.gb1.brightbox.com,Uid:a2ce60e151f60c21def6fdafb12c160d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c13c9042a57a7f183339a35a70eeb7af796e7125be141676f04182b9c718c98\"" Mar 19 12:02:52.880637 containerd[1526]: time="2025-03-19T12:02:52.880552656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-z8dvi.gb1.brightbox.com,Uid:1ff4b2597610b83c66cb835a9c6ccbd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"52ad17bc08f54966a6727802788f3d60227cec637d83460c4c95ad61b905c1bf\"" Mar 19 12:02:52.885462 containerd[1526]: time="2025-03-19T12:02:52.885350269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-z8dvi.gb1.brightbox.com,Uid:6964ce632ab0af3306b5a2630d3b811d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f37764a126065349936ac59f89ced138c32a5c53b35cdff255136f45c7bd5fe\"" Mar 19 12:02:52.889219 containerd[1526]: time="2025-03-19T12:02:52.889063727Z" level=info msg="CreateContainer within sandbox \"2c13c9042a57a7f183339a35a70eeb7af796e7125be141676f04182b9c718c98\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 12:02:52.889884 containerd[1526]: time="2025-03-19T12:02:52.889851278Z" level=info msg="CreateContainer within sandbox \"52ad17bc08f54966a6727802788f3d60227cec637d83460c4c95ad61b905c1bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 12:02:52.893219 containerd[1526]: time="2025-03-19T12:02:52.892989615Z" level=info msg="CreateContainer within sandbox \"9f37764a126065349936ac59f89ced138c32a5c53b35cdff255136f45c7bd5fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 12:02:52.999565 containerd[1526]: time="2025-03-19T12:02:52.999460001Z" level=info msg="CreateContainer within sandbox 
\"2c13c9042a57a7f183339a35a70eeb7af796e7125be141676f04182b9c718c98\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5891bb32912ee6fb72c5bb64cf691fc481449bb8d04acdd76191047257790aa\"" Mar 19 12:02:53.010698 containerd[1526]: time="2025-03-19T12:02:53.010648113Z" level=info msg="StartContainer for \"f5891bb32912ee6fb72c5bb64cf691fc481449bb8d04acdd76191047257790aa\"" Mar 19 12:02:53.011586 containerd[1526]: time="2025-03-19T12:02:53.011395483Z" level=info msg="CreateContainer within sandbox \"52ad17bc08f54966a6727802788f3d60227cec637d83460c4c95ad61b905c1bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99444077c4952352e50da4698a2ede63494c886fc4db44e0d2bb9d0ef2687c28\"" Mar 19 12:02:53.012288 containerd[1526]: time="2025-03-19T12:02:53.012220088Z" level=info msg="StartContainer for \"99444077c4952352e50da4698a2ede63494c886fc4db44e0d2bb9d0ef2687c28\"" Mar 19 12:02:53.019869 containerd[1526]: time="2025-03-19T12:02:53.019707898Z" level=info msg="CreateContainer within sandbox \"9f37764a126065349936ac59f89ced138c32a5c53b35cdff255136f45c7bd5fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54727de2adc88b250abd0a7bf8f3777b456f0922f87d3e8d7b85c6bf58dc87c0\"" Mar 19 12:02:53.023201 containerd[1526]: time="2025-03-19T12:02:53.022000373Z" level=info msg="StartContainer for \"54727de2adc88b250abd0a7bf8f3777b456f0922f87d3e8d7b85c6bf58dc87c0\"" Mar 19 12:02:53.064393 systemd[1]: Started cri-containerd-f5891bb32912ee6fb72c5bb64cf691fc481449bb8d04acdd76191047257790aa.scope - libcontainer container f5891bb32912ee6fb72c5bb64cf691fc481449bb8d04acdd76191047257790aa. Mar 19 12:02:53.218423 systemd[1]: Started cri-containerd-54727de2adc88b250abd0a7bf8f3777b456f0922f87d3e8d7b85c6bf58dc87c0.scope - libcontainer container 54727de2adc88b250abd0a7bf8f3777b456f0922f87d3e8d7b85c6bf58dc87c0. 
Mar 19 12:02:53.226288 kubelet[2455]: E0319 12:02:53.226065 2455 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.57.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:53.242408 systemd[1]: Started cri-containerd-99444077c4952352e50da4698a2ede63494c886fc4db44e0d2bb9d0ef2687c28.scope - libcontainer container 99444077c4952352e50da4698a2ede63494c886fc4db44e0d2bb9d0ef2687c28. Mar 19 12:02:53.316875 containerd[1526]: time="2025-03-19T12:02:53.316819056Z" level=info msg="StartContainer for \"f5891bb32912ee6fb72c5bb64cf691fc481449bb8d04acdd76191047257790aa\" returns successfully" Mar 19 12:02:53.372772 containerd[1526]: time="2025-03-19T12:02:53.372637299Z" level=info msg="StartContainer for \"99444077c4952352e50da4698a2ede63494c886fc4db44e0d2bb9d0ef2687c28\" returns successfully" Mar 19 12:02:53.388205 containerd[1526]: time="2025-03-19T12:02:53.388024838Z" level=info msg="StartContainer for \"54727de2adc88b250abd0a7bf8f3777b456f0922f87d3e8d7b85c6bf58dc87c0\" returns successfully" Mar 19 12:02:53.831777 kubelet[2455]: W0319 12:02:53.831486 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.57.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:53.831777 kubelet[2455]: E0319 12:02:53.831632 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.57.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:53.943214 kubelet[2455]: W0319 12:02:53.942242 2455 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.230.57.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-z8dvi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:53.943214 kubelet[2455]: E0319 12:02:53.942377 2455 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.57.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-z8dvi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.154:6443: connect: connection refused Mar 19 12:02:54.285222 kubelet[2455]: I0319 12:02:54.284157 2455 kubelet_node_status.go:73] "Attempting to register node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:56.011523 kubelet[2455]: E0319 12:02:56.011453 2455 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-z8dvi.gb1.brightbox.com\" not found" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:56.114203 kubelet[2455]: I0319 12:02:56.112342 2455 kubelet_node_status.go:76] "Successfully registered node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:56.138787 kubelet[2455]: E0319 12:02:56.138713 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.239954 kubelet[2455]: E0319 12:02:56.239893 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.340626 kubelet[2455]: E0319 12:02:56.340479 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.441532 kubelet[2455]: E0319 12:02:56.441468 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.542286 kubelet[2455]: E0319 12:02:56.542208 2455 kubelet_node_status.go:462] "Error getting the current node 
from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.642397 kubelet[2455]: E0319 12:02:56.642325 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.743175 kubelet[2455]: E0319 12:02:56.743097 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.843979 kubelet[2455]: E0319 12:02:56.843781 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:56.945046 kubelet[2455]: E0319 12:02:56.944814 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.045869 kubelet[2455]: E0319 12:02:57.045779 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.146820 kubelet[2455]: E0319 12:02:57.146683 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.247581 kubelet[2455]: E0319 12:02:57.247425 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.348288 kubelet[2455]: E0319 12:02:57.348175 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.449295 kubelet[2455]: E0319 12:02:57.449170 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.549993 kubelet[2455]: E0319 12:02:57.549811 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.650329 
kubelet[2455]: E0319 12:02:57.650218 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.751049 kubelet[2455]: E0319 12:02:57.750967 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.852327 kubelet[2455]: E0319 12:02:57.852047 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:57.952947 kubelet[2455]: E0319 12:02:57.952879 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.053523 kubelet[2455]: E0319 12:02:58.053477 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.062000 systemd[1]: Reload requested from client PID 2729 ('systemctl') (unit session-11.scope)... Mar 19 12:02:58.062029 systemd[1]: Reloading... Mar 19 12:02:58.153803 kubelet[2455]: E0319 12:02:58.153747 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.249232 zram_generator::config[2771]: No configuration found. Mar 19 12:02:58.254621 kubelet[2455]: E0319 12:02:58.254560 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.355051 kubelet[2455]: E0319 12:02:58.354982 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.452318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 19 12:02:58.455600 kubelet[2455]: E0319 12:02:58.455548 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.556122 kubelet[2455]: E0319 12:02:58.556060 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.626903 systemd[1]: Reloading finished in 563 ms. Mar 19 12:02:58.659243 kubelet[2455]: E0319 12:02:58.657804 2455 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-z8dvi.gb1.brightbox.com\" not found" Mar 19 12:02:58.678296 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 12:02:58.693011 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 12:02:58.693514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 12:02:58.693607 systemd[1]: kubelet.service: Consumed 1.023s CPU time, 111.4M memory peak. Mar 19 12:02:58.700587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 12:02:58.904010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 12:02:58.917661 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 12:02:59.056230 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 12:02:59.056230 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 19 12:02:59.056230 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 12:02:59.056230 kubelet[2839]: I0319 12:02:59.055780 2839 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 12:02:59.077491 sudo[2851]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 19 12:02:59.078231 kubelet[2839]: I0319 12:02:59.077627 2839 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 19 12:02:59.078231 kubelet[2839]: I0319 12:02:59.077659 2839 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 12:02:59.078231 kubelet[2839]: I0319 12:02:59.078082 2839 server.go:927] "Client rotation is on, will bootstrap in background" Mar 19 12:02:59.079023 sudo[2851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 19 12:02:59.083118 kubelet[2839]: I0319 12:02:59.082857 2839 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 12:02:59.085888 kubelet[2839]: I0319 12:02:59.085859 2839 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 12:02:59.107330 kubelet[2839]: I0319 12:02:59.107265 2839 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 19 12:02:59.108380 kubelet[2839]: I0319 12:02:59.108326 2839 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 12:02:59.108625 kubelet[2839]: I0319 12:02:59.108381 2839 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-z8dvi.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 19 12:02:59.109459 kubelet[2839]: I0319 12:02:59.109140 2839 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 19 12:02:59.109459 kubelet[2839]: I0319 12:02:59.109169 2839 container_manager_linux.go:301] "Creating device plugin manager" Mar 19 12:02:59.110838 kubelet[2839]: I0319 12:02:59.110604 2839 state_mem.go:36] "Initialized new in-memory state store" Mar 19 12:02:59.110838 kubelet[2839]: I0319 12:02:59.110776 2839 kubelet.go:400] "Attempting to sync node with API server" Mar 19 12:02:59.110838 kubelet[2839]: I0319 12:02:59.110807 2839 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 12:02:59.110838 kubelet[2839]: I0319 12:02:59.110838 2839 kubelet.go:312] "Adding apiserver pod source" Mar 19 12:02:59.111061 kubelet[2839]: I0319 12:02:59.110866 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 12:02:59.120234 kubelet[2839]: I0319 12:02:59.119646 2839 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 12:02:59.120234 kubelet[2839]: I0319 12:02:59.119913 2839 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 12:02:59.123176 kubelet[2839]: I0319 12:02:59.122649 2839 server.go:1264] "Started kubelet" Mar 19 12:02:59.133541 kubelet[2839]: I0319 12:02:59.133503 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 12:02:59.143272 kubelet[2839]: I0319 12:02:59.140719 2839 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 12:02:59.150705 kubelet[2839]: I0319 12:02:59.150647 2839 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 19 12:02:59.154372 kubelet[2839]: I0319 12:02:59.154230 2839 server.go:455] "Adding debug handlers to kubelet server" Mar 19 12:02:59.157642 kubelet[2839]: I0319 12:02:59.157602 2839 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 12:02:59.161750 kubelet[2839]: I0319 12:02:59.157861 2839 reconciler.go:26] "Reconciler: start to sync state" Mar 19 12:02:59.161750 
kubelet[2839]: E0319 12:02:59.159331 2839 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 12:02:59.166841 kubelet[2839]: I0319 12:02:59.166769 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 12:02:59.170206 kubelet[2839]: I0319 12:02:59.167313 2839 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 12:02:59.182358 kubelet[2839]: I0319 12:02:59.182298 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 12:02:59.191200 kubelet[2839]: I0319 12:02:59.184829 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 12:02:59.191200 kubelet[2839]: I0319 12:02:59.184885 2839 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 12:02:59.191200 kubelet[2839]: I0319 12:02:59.184923 2839 kubelet.go:2337] "Starting kubelet main sync loop" Mar 19 12:02:59.191200 kubelet[2839]: E0319 12:02:59.184991 2839 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 12:02:59.204670 kubelet[2839]: I0319 12:02:59.204631 2839 factory.go:221] Registration of the systemd container factory successfully Mar 19 12:02:59.205401 kubelet[2839]: I0319 12:02:59.204963 2839 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 12:02:59.212677 kubelet[2839]: I0319 12:02:59.212501 2839 factory.go:221] Registration of the containerd container factory successfully Mar 19 12:02:59.272699 kubelet[2839]: I0319 12:02:59.271915 2839 kubelet_node_status.go:73] "Attempting to register node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 
12:02:59.283937 kubelet[2839]: I0319 12:02:59.283903 2839 kubelet_node_status.go:112] "Node was previously registered" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.284072 kubelet[2839]: I0319 12:02:59.284015 2839 kubelet_node_status.go:76] "Successfully registered node" node="srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.287479 kubelet[2839]: E0319 12:02:59.287453 2839 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 12:02:59.350581 kubelet[2839]: I0319 12:02:59.350542 2839 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 12:02:59.350581 kubelet[2839]: I0319 12:02:59.350570 2839 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 12:02:59.350796 kubelet[2839]: I0319 12:02:59.350599 2839 state_mem.go:36] "Initialized new in-memory state store" Mar 19 12:02:59.350878 kubelet[2839]: I0319 12:02:59.350805 2839 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 12:02:59.350878 kubelet[2839]: I0319 12:02:59.350825 2839 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 12:02:59.350878 kubelet[2839]: I0319 12:02:59.350855 2839 policy_none.go:49] "None policy: Start" Mar 19 12:02:59.352340 kubelet[2839]: I0319 12:02:59.352315 2839 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 12:02:59.352417 kubelet[2839]: I0319 12:02:59.352359 2839 state_mem.go:35] "Initializing new in-memory state store" Mar 19 12:02:59.352822 kubelet[2839]: I0319 12:02:59.352602 2839 state_mem.go:75] "Updated machine memory state" Mar 19 12:02:59.368095 kubelet[2839]: I0319 12:02:59.367836 2839 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 12:02:59.368095 kubelet[2839]: I0319 12:02:59.368073 2839 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 12:02:59.370668 kubelet[2839]: I0319 12:02:59.369627 2839 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 12:02:59.488342 kubelet[2839]: I0319 12:02:59.488206 2839 topology_manager.go:215] "Topology Admit Handler" podUID="6964ce632ab0af3306b5a2630d3b811d" podNamespace="kube-system" podName="kube-scheduler-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.488471 kubelet[2839]: I0319 12:02:59.488343 2839 topology_manager.go:215] "Topology Admit Handler" podUID="1ff4b2597610b83c66cb835a9c6ccbd6" podNamespace="kube-system" podName="kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.488471 kubelet[2839]: I0319 12:02:59.488453 2839 topology_manager.go:215] "Topology Admit Handler" podUID="a2ce60e151f60c21def6fdafb12c160d" podNamespace="kube-system" podName="kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.506788 kubelet[2839]: W0319 12:02:59.506744 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 12:02:59.507495 kubelet[2839]: W0319 12:02:59.507450 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 12:02:59.509476 kubelet[2839]: W0319 12:02:59.509446 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 12:02:59.563175 kubelet[2839]: I0319 12:02:59.563110 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-ca-certs\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.663899 kubelet[2839]: I0319 12:02:59.663783 2839 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6964ce632ab0af3306b5a2630d3b811d-kubeconfig\") pod \"kube-scheduler-srv-z8dvi.gb1.brightbox.com\" (UID: \"6964ce632ab0af3306b5a2630d3b811d\") " pod="kube-system/kube-scheduler-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.664084 kubelet[2839]: I0319 12:02:59.663956 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ff4b2597610b83c66cb835a9c6ccbd6-usr-share-ca-certificates\") pod \"kube-apiserver-srv-z8dvi.gb1.brightbox.com\" (UID: \"1ff4b2597610b83c66cb835a9c6ccbd6\") " pod="kube-system/kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.664927 kubelet[2839]: I0319 12:02:59.664471 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-k8s-certs\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.665036 kubelet[2839]: I0319 12:02:59.664971 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-kubeconfig\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.665112 kubelet[2839]: I0319 12:02:59.665048 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" 
(UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.665112 kubelet[2839]: I0319 12:02:59.665098 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ff4b2597610b83c66cb835a9c6ccbd6-ca-certs\") pod \"kube-apiserver-srv-z8dvi.gb1.brightbox.com\" (UID: \"1ff4b2597610b83c66cb835a9c6ccbd6\") " pod="kube-system/kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.665253 kubelet[2839]: I0319 12:02:59.665133 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ff4b2597610b83c66cb835a9c6ccbd6-k8s-certs\") pod \"kube-apiserver-srv-z8dvi.gb1.brightbox.com\" (UID: \"1ff4b2597610b83c66cb835a9c6ccbd6\") " pod="kube-system/kube-apiserver-srv-z8dvi.gb1.brightbox.com" Mar 19 12:02:59.665253 kubelet[2839]: I0319 12:02:59.665160 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a2ce60e151f60c21def6fdafb12c160d-flexvolume-dir\") pod \"kube-controller-manager-srv-z8dvi.gb1.brightbox.com\" (UID: \"a2ce60e151f60c21def6fdafb12c160d\") " pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" Mar 19 12:03:00.018617 sudo[2851]: pam_unix(sudo:session): session closed for user root Mar 19 12:03:00.115130 kubelet[2839]: I0319 12:03:00.115083 2839 apiserver.go:52] "Watching apiserver" Mar 19 12:03:00.158308 kubelet[2839]: I0319 12:03:00.158167 2839 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 12:03:00.248820 kubelet[2839]: I0319 12:03:00.247954 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-z8dvi.gb1.brightbox.com" podStartSLOduration=1.247926008 
podStartE2EDuration="1.247926008s" podCreationTimestamp="2025-03-19 12:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 12:03:00.243108638 +0000 UTC m=+1.287021908" watchObservedRunningTime="2025-03-19 12:03:00.247926008 +0000 UTC m=+1.291839262" Mar 19 12:03:00.270527 kubelet[2839]: I0319 12:03:00.270377 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-z8dvi.gb1.brightbox.com" podStartSLOduration=1.270355037 podStartE2EDuration="1.270355037s" podCreationTimestamp="2025-03-19 12:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 12:03:00.269489524 +0000 UTC m=+1.313402806" watchObservedRunningTime="2025-03-19 12:03:00.270355037 +0000 UTC m=+1.314268306" Mar 19 12:03:00.270735 kubelet[2839]: I0319 12:03:00.270530 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-z8dvi.gb1.brightbox.com" podStartSLOduration=1.270522114 podStartE2EDuration="1.270522114s" podCreationTimestamp="2025-03-19 12:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 12:03:00.257830095 +0000 UTC m=+1.301743376" watchObservedRunningTime="2025-03-19 12:03:00.270522114 +0000 UTC m=+1.314435371" Mar 19 12:03:01.952483 sudo[1809]: pam_unix(sudo:session): session closed for user root Mar 19 12:03:02.097311 sshd[1808]: Connection closed by 139.178.89.65 port 47432 Mar 19 12:03:02.098205 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Mar 19 12:03:02.104328 systemd[1]: sshd@8-10.230.57.154:22-139.178.89.65:47432.service: Deactivated successfully. Mar 19 12:03:02.109305 systemd[1]: session-11.scope: Deactivated successfully. 
Mar 19 12:03:02.109986 systemd[1]: session-11.scope: Consumed 7.333s CPU time, 232.3M memory peak. Mar 19 12:03:02.114461 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. Mar 19 12:03:02.116630 systemd-logind[1509]: Removed session 11. Mar 19 12:03:11.689499 kubelet[2839]: I0319 12:03:11.689360 2839 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 19 12:03:11.690606 kubelet[2839]: I0319 12:03:11.690263 2839 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 19 12:03:11.690700 containerd[1526]: time="2025-03-19T12:03:11.689883215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 19 12:03:12.385897 kubelet[2839]: I0319 12:03:12.385693 2839 topology_manager.go:215] "Topology Admit Handler" podUID="32b4eb99-81fa-421d-8b0c-2b10079a35ed" podNamespace="kube-system" podName="kube-proxy-rc5kp" Mar 19 12:03:12.387774 kubelet[2839]: I0319 12:03:12.387433 2839 topology_manager.go:215] "Topology Admit Handler" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" podNamespace="kube-system" podName="cilium-8qgr8" Mar 19 12:03:12.412035 systemd[1]: Created slice kubepods-besteffort-pod32b4eb99_81fa_421d_8b0c_2b10079a35ed.slice - libcontainer container kubepods-besteffort-pod32b4eb99_81fa_421d_8b0c_2b10079a35ed.slice. Mar 19 12:03:12.433787 systemd[1]: Created slice kubepods-burstable-pod72d8bf8f_c353_4f8a_a457_3a6f94f2aa00.slice - libcontainer container kubepods-burstable-pod72d8bf8f_c353_4f8a_a457_3a6f94f2aa00.slice. 
Mar 19 12:03:12.450971 kubelet[2839]: I0319 12:03:12.450157 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hostproc\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.450971 kubelet[2839]: I0319 12:03:12.450236 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-bpf-maps\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.450971 kubelet[2839]: I0319 12:03:12.450274 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-cgroup\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.450971 kubelet[2839]: I0319 12:03:12.450304 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hubble-tls\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.450971 kubelet[2839]: I0319 12:03:12.450330 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32b4eb99-81fa-421d-8b0c-2b10079a35ed-xtables-lock\") pod \"kube-proxy-rc5kp\" (UID: \"32b4eb99-81fa-421d-8b0c-2b10079a35ed\") " pod="kube-system/kube-proxy-rc5kp" Mar 19 12:03:12.450971 kubelet[2839]: I0319 12:03:12.450384 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32b4eb99-81fa-421d-8b0c-2b10079a35ed-lib-modules\") pod \"kube-proxy-rc5kp\" (UID: \"32b4eb99-81fa-421d-8b0c-2b10079a35ed\") " pod="kube-system/kube-proxy-rc5kp" Mar 19 12:03:12.451504 kubelet[2839]: I0319 12:03:12.450412 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-run\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.451504 kubelet[2839]: I0319 12:03:12.450442 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/32b4eb99-81fa-421d-8b0c-2b10079a35ed-kube-proxy\") pod \"kube-proxy-rc5kp\" (UID: \"32b4eb99-81fa-421d-8b0c-2b10079a35ed\") " pod="kube-system/kube-proxy-rc5kp" Mar 19 12:03:12.451504 kubelet[2839]: I0319 12:03:12.450469 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-xtables-lock\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.451504 kubelet[2839]: I0319 12:03:12.450495 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-lib-modules\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.451504 kubelet[2839]: I0319 12:03:12.450544 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-etc-cni-netd\") pod \"cilium-8qgr8\" 
(UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.451504 kubelet[2839]: I0319 12:03:12.450574 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-net\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.451819 kubelet[2839]: I0319 12:03:12.450612 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74hnd\" (UniqueName: \"kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-kube-api-access-74hnd\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.451819 kubelet[2839]: I0319 12:03:12.450645 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-kernel\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.451819 kubelet[2839]: I0319 12:03:12.450682 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rlg\" (UniqueName: \"kubernetes.io/projected/32b4eb99-81fa-421d-8b0c-2b10079a35ed-kube-api-access-f4rlg\") pod \"kube-proxy-rc5kp\" (UID: \"32b4eb99-81fa-421d-8b0c-2b10079a35ed\") " pod="kube-system/kube-proxy-rc5kp" Mar 19 12:03:12.451819 kubelet[2839]: I0319 12:03:12.450710 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cni-path\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" 
Mar 19 12:03:12.451819 kubelet[2839]: I0319 12:03:12.450765 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-clustermesh-secrets\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.452071 kubelet[2839]: I0319 12:03:12.450797 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-config-path\") pod \"cilium-8qgr8\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " pod="kube-system/cilium-8qgr8" Mar 19 12:03:12.499206 kubelet[2839]: I0319 12:03:12.498728 2839 topology_manager.go:215] "Topology Admit Handler" podUID="6552963a-9e71-42ae-8b73-d4b7ef6e393b" podNamespace="kube-system" podName="cilium-operator-599987898-djr6j" Mar 19 12:03:12.512044 systemd[1]: Created slice kubepods-besteffort-pod6552963a_9e71_42ae_8b73_d4b7ef6e393b.slice - libcontainer container kubepods-besteffort-pod6552963a_9e71_42ae_8b73_d4b7ef6e393b.slice. 
Mar 19 12:03:12.552141 kubelet[2839]: I0319 12:03:12.552040 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6552963a-9e71-42ae-8b73-d4b7ef6e393b-cilium-config-path\") pod \"cilium-operator-599987898-djr6j\" (UID: \"6552963a-9e71-42ae-8b73-d4b7ef6e393b\") " pod="kube-system/cilium-operator-599987898-djr6j" Mar 19 12:03:12.554366 kubelet[2839]: I0319 12:03:12.552795 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhtst\" (UniqueName: \"kubernetes.io/projected/6552963a-9e71-42ae-8b73-d4b7ef6e393b-kube-api-access-mhtst\") pod \"cilium-operator-599987898-djr6j\" (UID: \"6552963a-9e71-42ae-8b73-d4b7ef6e393b\") " pod="kube-system/cilium-operator-599987898-djr6j" Mar 19 12:03:12.729074 containerd[1526]: time="2025-03-19T12:03:12.728357529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rc5kp,Uid:32b4eb99-81fa-421d-8b0c-2b10079a35ed,Namespace:kube-system,Attempt:0,}" Mar 19 12:03:12.743143 containerd[1526]: time="2025-03-19T12:03:12.743089704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qgr8,Uid:72d8bf8f-c353-4f8a-a457-3a6f94f2aa00,Namespace:kube-system,Attempt:0,}" Mar 19 12:03:12.800926 containerd[1526]: time="2025-03-19T12:03:12.800790124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 12:03:12.801106 containerd[1526]: time="2025-03-19T12:03:12.800896443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 12:03:12.801106 containerd[1526]: time="2025-03-19T12:03:12.800917889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:03:12.801307 containerd[1526]: time="2025-03-19T12:03:12.801120917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:03:12.808429 containerd[1526]: time="2025-03-19T12:03:12.808042168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 12:03:12.808429 containerd[1526]: time="2025-03-19T12:03:12.808132093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 12:03:12.808429 containerd[1526]: time="2025-03-19T12:03:12.808159252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:03:12.808429 containerd[1526]: time="2025-03-19T12:03:12.808304174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:03:12.820806 containerd[1526]: time="2025-03-19T12:03:12.820576823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-djr6j,Uid:6552963a-9e71-42ae-8b73-d4b7ef6e393b,Namespace:kube-system,Attempt:0,}" Mar 19 12:03:12.842353 systemd[1]: Started cri-containerd-e81c14af9fda1b59cbd85e2fe91e12fea68a14123e66d548ee71a61623f76574.scope - libcontainer container e81c14af9fda1b59cbd85e2fe91e12fea68a14123e66d548ee71a61623f76574. Mar 19 12:03:12.859583 systemd[1]: Started cri-containerd-d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4.scope - libcontainer container d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4. Mar 19 12:03:12.926896 containerd[1526]: time="2025-03-19T12:03:12.926697490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 12:03:12.927727 containerd[1526]: time="2025-03-19T12:03:12.927542662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 12:03:12.927727 containerd[1526]: time="2025-03-19T12:03:12.927588537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:03:12.928017 containerd[1526]: time="2025-03-19T12:03:12.927963629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:03:12.935019 containerd[1526]: time="2025-03-19T12:03:12.934698456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qgr8,Uid:72d8bf8f-c353-4f8a-a457-3a6f94f2aa00,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\"" Mar 19 12:03:12.940446 containerd[1526]: time="2025-03-19T12:03:12.940321232Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 19 12:03:12.950256 containerd[1526]: time="2025-03-19T12:03:12.950112760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rc5kp,Uid:32b4eb99-81fa-421d-8b0c-2b10079a35ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"e81c14af9fda1b59cbd85e2fe91e12fea68a14123e66d548ee71a61623f76574\"" Mar 19 12:03:12.959113 containerd[1526]: time="2025-03-19T12:03:12.959056315Z" level=info msg="CreateContainer within sandbox \"e81c14af9fda1b59cbd85e2fe91e12fea68a14123e66d548ee71a61623f76574\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 12:03:12.972221 systemd[1]: Started cri-containerd-e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102.scope - libcontainer container 
e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102. Mar 19 12:03:12.992209 containerd[1526]: time="2025-03-19T12:03:12.991912138Z" level=info msg="CreateContainer within sandbox \"e81c14af9fda1b59cbd85e2fe91e12fea68a14123e66d548ee71a61623f76574\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1e5f19dfe34b46034b8bced6143730be01c84362f4c43733194ce77ded357b8\"" Mar 19 12:03:12.995211 containerd[1526]: time="2025-03-19T12:03:12.993480248Z" level=info msg="StartContainer for \"b1e5f19dfe34b46034b8bced6143730be01c84362f4c43733194ce77ded357b8\"" Mar 19 12:03:13.044409 systemd[1]: Started cri-containerd-b1e5f19dfe34b46034b8bced6143730be01c84362f4c43733194ce77ded357b8.scope - libcontainer container b1e5f19dfe34b46034b8bced6143730be01c84362f4c43733194ce77ded357b8. Mar 19 12:03:13.081855 containerd[1526]: time="2025-03-19T12:03:13.081797262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-djr6j,Uid:6552963a-9e71-42ae-8b73-d4b7ef6e393b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102\"" Mar 19 12:03:13.131925 containerd[1526]: time="2025-03-19T12:03:13.131871033Z" level=info msg="StartContainer for \"b1e5f19dfe34b46034b8bced6143730be01c84362f4c43733194ce77ded357b8\" returns successfully" Mar 19 12:03:13.311297 kubelet[2839]: I0319 12:03:13.310346 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rc5kp" podStartSLOduration=1.310322061 podStartE2EDuration="1.310322061s" podCreationTimestamp="2025-03-19 12:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 12:03:13.310018181 +0000 UTC m=+14.353931467" watchObservedRunningTime="2025-03-19 12:03:13.310322061 +0000 UTC m=+14.354235305" Mar 19 12:03:21.821779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148223222.mount: 
Deactivated successfully. Mar 19 12:03:25.041068 containerd[1526]: time="2025-03-19T12:03:25.041001210Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 12:03:25.043957 containerd[1526]: time="2025-03-19T12:03:25.043911184Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 19 12:03:25.045217 containerd[1526]: time="2025-03-19T12:03:25.045136995Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 12:03:25.048710 containerd[1526]: time="2025-03-19T12:03:25.048503971Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.10813862s" Mar 19 12:03:25.048710 containerd[1526]: time="2025-03-19T12:03:25.048554696Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 19 12:03:25.071761 containerd[1526]: time="2025-03-19T12:03:25.071685294Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 19 12:03:25.074511 containerd[1526]: time="2025-03-19T12:03:25.074248794Z" level=info msg="CreateContainer within sandbox 
\"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 12:03:25.145502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106919789.mount: Deactivated successfully. Mar 19 12:03:25.154771 containerd[1526]: time="2025-03-19T12:03:25.154719999Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\"" Mar 19 12:03:25.155851 containerd[1526]: time="2025-03-19T12:03:25.155701314Z" level=info msg="StartContainer for \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\"" Mar 19 12:03:25.434516 systemd[1]: Started cri-containerd-de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194.scope - libcontainer container de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194. Mar 19 12:03:25.489621 containerd[1526]: time="2025-03-19T12:03:25.489523179Z" level=info msg="StartContainer for \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\" returns successfully" Mar 19 12:03:25.510132 systemd[1]: cri-containerd-de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194.scope: Deactivated successfully. 
Mar 19 12:03:25.627489 containerd[1526]: time="2025-03-19T12:03:25.614677677Z" level=info msg="shim disconnected" id=de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194 namespace=k8s.io Mar 19 12:03:25.627815 containerd[1526]: time="2025-03-19T12:03:25.627768106Z" level=warning msg="cleaning up after shim disconnected" id=de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194 namespace=k8s.io Mar 19 12:03:25.627928 containerd[1526]: time="2025-03-19T12:03:25.627899534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:03:26.140122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194-rootfs.mount: Deactivated successfully. Mar 19 12:03:26.356841 containerd[1526]: time="2025-03-19T12:03:26.356641087Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 19 12:03:26.377121 containerd[1526]: time="2025-03-19T12:03:26.377068379Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\"" Mar 19 12:03:26.378236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808685985.mount: Deactivated successfully. Mar 19 12:03:26.381950 containerd[1526]: time="2025-03-19T12:03:26.379907503Z" level=info msg="StartContainer for \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\"" Mar 19 12:03:26.432468 systemd[1]: Started cri-containerd-54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382.scope - libcontainer container 54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382. 
Mar 19 12:03:26.473443 containerd[1526]: time="2025-03-19T12:03:26.473388949Z" level=info msg="StartContainer for \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\" returns successfully" Mar 19 12:03:26.494952 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 12:03:26.495395 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 12:03:26.496108 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 19 12:03:26.502656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 12:03:26.503046 systemd[1]: cri-containerd-54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382.scope: Deactivated successfully. Mar 19 12:03:26.544725 containerd[1526]: time="2025-03-19T12:03:26.544640659Z" level=info msg="shim disconnected" id=54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382 namespace=k8s.io Mar 19 12:03:26.544725 containerd[1526]: time="2025-03-19T12:03:26.544711928Z" level=warning msg="cleaning up after shim disconnected" id=54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382 namespace=k8s.io Mar 19 12:03:26.544725 containerd[1526]: time="2025-03-19T12:03:26.544729088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:03:26.552592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 12:03:26.572463 containerd[1526]: time="2025-03-19T12:03:26.572282228Z" level=warning msg="cleanup warnings time=\"2025-03-19T12:03:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 19 12:03:27.143488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382-rootfs.mount: Deactivated successfully. 
Mar 19 12:03:27.360577 containerd[1526]: time="2025-03-19T12:03:27.360396830Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 19 12:03:27.440257 containerd[1526]: time="2025-03-19T12:03:27.440068241Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\"" Mar 19 12:03:27.441658 containerd[1526]: time="2025-03-19T12:03:27.440939686Z" level=info msg="StartContainer for \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\"" Mar 19 12:03:27.487475 systemd[1]: Started cri-containerd-be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c.scope - libcontainer container be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c. Mar 19 12:03:27.541884 systemd[1]: cri-containerd-be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c.scope: Deactivated successfully. 
Mar 19 12:03:27.545940 containerd[1526]: time="2025-03-19T12:03:27.545847184Z" level=info msg="StartContainer for \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\" returns successfully" Mar 19 12:03:27.581895 containerd[1526]: time="2025-03-19T12:03:27.581817392Z" level=info msg="shim disconnected" id=be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c namespace=k8s.io Mar 19 12:03:27.581895 containerd[1526]: time="2025-03-19T12:03:27.581892379Z" level=warning msg="cleaning up after shim disconnected" id=be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c namespace=k8s.io Mar 19 12:03:27.582404 containerd[1526]: time="2025-03-19T12:03:27.581908388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:03:28.144146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c-rootfs.mount: Deactivated successfully. Mar 19 12:03:28.368213 containerd[1526]: time="2025-03-19T12:03:28.368011601Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 19 12:03:28.395643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571064855.mount: Deactivated successfully. 
Mar 19 12:03:28.400547 containerd[1526]: time="2025-03-19T12:03:28.400437556Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\"" Mar 19 12:03:28.401627 containerd[1526]: time="2025-03-19T12:03:28.401590242Z" level=info msg="StartContainer for \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\"" Mar 19 12:03:28.465424 systemd[1]: Started cri-containerd-5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae.scope - libcontainer container 5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae. Mar 19 12:03:28.559480 systemd[1]: cri-containerd-5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae.scope: Deactivated successfully. Mar 19 12:03:28.563853 containerd[1526]: time="2025-03-19T12:03:28.563066680Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d8bf8f_c353_4f8a_a457_3a6f94f2aa00.slice/cri-containerd-5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae.scope/memory.events\": no such file or directory" Mar 19 12:03:28.605141 containerd[1526]: time="2025-03-19T12:03:28.603998432Z" level=info msg="StartContainer for \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\" returns successfully" Mar 19 12:03:28.671255 containerd[1526]: time="2025-03-19T12:03:28.670881168Z" level=info msg="shim disconnected" id=5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae namespace=k8s.io Mar 19 12:03:28.671255 containerd[1526]: time="2025-03-19T12:03:28.670947089Z" level=warning msg="cleaning up after shim disconnected" id=5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae namespace=k8s.io Mar 19 12:03:28.671255 containerd[1526]: 
time="2025-03-19T12:03:28.670962148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:03:28.989372 containerd[1526]: time="2025-03-19T12:03:28.989233368Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 12:03:28.990492 containerd[1526]: time="2025-03-19T12:03:28.990440025Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 19 12:03:28.991488 containerd[1526]: time="2025-03-19T12:03:28.991408261Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 12:03:28.993801 containerd[1526]: time="2025-03-19T12:03:28.993757637Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.922016621s" Mar 19 12:03:28.993886 containerd[1526]: time="2025-03-19T12:03:28.993802979Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 19 12:03:28.997778 containerd[1526]: time="2025-03-19T12:03:28.997733236Z" level=info msg="CreateContainer within sandbox \"e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 19 12:03:29.018036 
containerd[1526]: time="2025-03-19T12:03:29.017990593Z" level=info msg="CreateContainer within sandbox \"e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\"" Mar 19 12:03:29.020076 containerd[1526]: time="2025-03-19T12:03:29.018613715Z" level=info msg="StartContainer for \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\"" Mar 19 12:03:29.057417 systemd[1]: Started cri-containerd-75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63.scope - libcontainer container 75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63. Mar 19 12:03:29.100786 containerd[1526]: time="2025-03-19T12:03:29.100733469Z" level=info msg="StartContainer for \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\" returns successfully" Mar 19 12:03:29.145926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae-rootfs.mount: Deactivated successfully. Mar 19 12:03:29.379782 containerd[1526]: time="2025-03-19T12:03:29.379729684Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 19 12:03:29.417558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638846522.mount: Deactivated successfully. 
Mar 19 12:03:29.421442 containerd[1526]: time="2025-03-19T12:03:29.421312295Z" level=info msg="CreateContainer within sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\"" Mar 19 12:03:29.422605 containerd[1526]: time="2025-03-19T12:03:29.422569923Z" level=info msg="StartContainer for \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\"" Mar 19 12:03:29.521427 systemd[1]: Started cri-containerd-f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5.scope - libcontainer container f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5. Mar 19 12:03:29.685129 containerd[1526]: time="2025-03-19T12:03:29.684628915Z" level=info msg="StartContainer for \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\" returns successfully" Mar 19 12:03:30.145720 systemd[1]: run-containerd-runc-k8s.io-f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5-runc.7RtHTt.mount: Deactivated successfully. 
Mar 19 12:03:30.208433 kubelet[2839]: I0319 12:03:30.205857 2839 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 19 12:03:30.274556 kubelet[2839]: I0319 12:03:30.273168 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-djr6j" podStartSLOduration=2.363672442 podStartE2EDuration="18.273128839s" podCreationTimestamp="2025-03-19 12:03:12 +0000 UTC" firstStartedPulling="2025-03-19 12:03:13.085913703 +0000 UTC m=+14.129826945" lastFinishedPulling="2025-03-19 12:03:28.9953701 +0000 UTC m=+30.039283342" observedRunningTime="2025-03-19 12:03:29.646255028 +0000 UTC m=+30.690168310" watchObservedRunningTime="2025-03-19 12:03:30.273128839 +0000 UTC m=+31.317042083" Mar 19 12:03:30.277233 kubelet[2839]: I0319 12:03:30.276962 2839 topology_manager.go:215] "Topology Admit Handler" podUID="49fccb47-784a-4992-95fd-7a73085b17d0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8kxq8" Mar 19 12:03:30.290874 systemd[1]: Created slice kubepods-burstable-pod49fccb47_784a_4992_95fd_7a73085b17d0.slice - libcontainer container kubepods-burstable-pod49fccb47_784a_4992_95fd_7a73085b17d0.slice. Mar 19 12:03:30.300391 kubelet[2839]: I0319 12:03:30.300344 2839 topology_manager.go:215] "Topology Admit Handler" podUID="6795c056-6188-456e-830c-e7762c9df199" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c2chg" Mar 19 12:03:30.314619 systemd[1]: Created slice kubepods-burstable-pod6795c056_6188_456e_830c_e7762c9df199.slice - libcontainer container kubepods-burstable-pod6795c056_6188_456e_830c_e7762c9df199.slice. 
Mar 19 12:03:30.342482 kubelet[2839]: I0319 12:03:30.342431 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkmh\" (UniqueName: \"kubernetes.io/projected/49fccb47-784a-4992-95fd-7a73085b17d0-kube-api-access-4rkmh\") pod \"coredns-7db6d8ff4d-8kxq8\" (UID: \"49fccb47-784a-4992-95fd-7a73085b17d0\") " pod="kube-system/coredns-7db6d8ff4d-8kxq8"
Mar 19 12:03:30.343018 kubelet[2839]: I0319 12:03:30.342769 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6795c056-6188-456e-830c-e7762c9df199-config-volume\") pod \"coredns-7db6d8ff4d-c2chg\" (UID: \"6795c056-6188-456e-830c-e7762c9df199\") " pod="kube-system/coredns-7db6d8ff4d-c2chg"
Mar 19 12:03:30.343018 kubelet[2839]: I0319 12:03:30.342875 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfztd\" (UniqueName: \"kubernetes.io/projected/6795c056-6188-456e-830c-e7762c9df199-kube-api-access-zfztd\") pod \"coredns-7db6d8ff4d-c2chg\" (UID: \"6795c056-6188-456e-830c-e7762c9df199\") " pod="kube-system/coredns-7db6d8ff4d-c2chg"
Mar 19 12:03:30.343018 kubelet[2839]: I0319 12:03:30.342950 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49fccb47-784a-4992-95fd-7a73085b17d0-config-volume\") pod \"coredns-7db6d8ff4d-8kxq8\" (UID: \"49fccb47-784a-4992-95fd-7a73085b17d0\") " pod="kube-system/coredns-7db6d8ff4d-8kxq8"
Mar 19 12:03:30.597698 containerd[1526]: time="2025-03-19T12:03:30.597545989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8kxq8,Uid:49fccb47-784a-4992-95fd-7a73085b17d0,Namespace:kube-system,Attempt:0,}"
Mar 19 12:03:30.623441 containerd[1526]: time="2025-03-19T12:03:30.623382149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c2chg,Uid:6795c056-6188-456e-830c-e7762c9df199,Namespace:kube-system,Attempt:0,}"
Mar 19 12:03:32.815424 systemd-networkd[1447]: cilium_host: Link UP
Mar 19 12:03:32.819833 systemd-networkd[1447]: cilium_net: Link UP
Mar 19 12:03:32.820524 systemd-networkd[1447]: cilium_net: Gained carrier
Mar 19 12:03:32.821359 systemd-networkd[1447]: cilium_host: Gained carrier
Mar 19 12:03:32.821879 systemd-networkd[1447]: cilium_net: Gained IPv6LL
Mar 19 12:03:32.822612 systemd-networkd[1447]: cilium_host: Gained IPv6LL
Mar 19 12:03:32.986348 systemd-networkd[1447]: cilium_vxlan: Link UP
Mar 19 12:03:32.986360 systemd-networkd[1447]: cilium_vxlan: Gained carrier
Mar 19 12:03:33.504362 kernel: NET: Registered PF_ALG protocol family
Mar 19 12:03:34.289428 systemd-networkd[1447]: cilium_vxlan: Gained IPv6LL
Mar 19 12:03:34.593668 systemd-networkd[1447]: lxc_health: Link UP
Mar 19 12:03:34.600330 systemd-networkd[1447]: lxc_health: Gained carrier
Mar 19 12:03:34.772949 kubelet[2839]: I0319 12:03:34.772871 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8qgr8" podStartSLOduration=10.639381349 podStartE2EDuration="22.772839435s" podCreationTimestamp="2025-03-19 12:03:12 +0000 UTC" firstStartedPulling="2025-03-19 12:03:12.937618763 +0000 UTC m=+13.981532006" lastFinishedPulling="2025-03-19 12:03:25.071076851 +0000 UTC m=+26.114990092" observedRunningTime="2025-03-19 12:03:30.494915646 +0000 UTC m=+31.538828930" watchObservedRunningTime="2025-03-19 12:03:34.772839435 +0000 UTC m=+35.816752676"
Mar 19 12:03:35.244251 kernel: eth0: renamed from tmp86ba9
Mar 19 12:03:35.246451 systemd-networkd[1447]: lxcbfa37e621ad4: Link UP
Mar 19 12:03:35.250625 systemd-networkd[1447]: lxcbfa37e621ad4: Gained carrier
Mar 19 12:03:35.296431 systemd-networkd[1447]: lxc87c01e145656: Link UP
Mar 19 12:03:35.307228 kernel: eth0: renamed from tmp3bfd9
Mar 19 12:03:35.311420 systemd-networkd[1447]: lxc87c01e145656: Gained carrier
Mar 19 12:03:35.825562 systemd-networkd[1447]: lxc_health: Gained IPv6LL
Mar 19 12:03:37.105536 systemd-networkd[1447]: lxc87c01e145656: Gained IPv6LL
Mar 19 12:03:37.169504 systemd-networkd[1447]: lxcbfa37e621ad4: Gained IPv6LL
Mar 19 12:03:40.633640 containerd[1526]: time="2025-03-19T12:03:40.632909581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 12:03:40.633640 containerd[1526]: time="2025-03-19T12:03:40.633154715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 12:03:40.633640 containerd[1526]: time="2025-03-19T12:03:40.633220606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 12:03:40.633640 containerd[1526]: time="2025-03-19T12:03:40.633408749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 12:03:40.689998 systemd[1]: Started cri-containerd-86ba98dfbadc4a84c52734801ae91de2e4c505604a22a6b6392ca10e298e1536.scope - libcontainer container 86ba98dfbadc4a84c52734801ae91de2e4c505604a22a6b6392ca10e298e1536.
Mar 19 12:03:40.802152 containerd[1526]: time="2025-03-19T12:03:40.802002466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c2chg,Uid:6795c056-6188-456e-830c-e7762c9df199,Namespace:kube-system,Attempt:0,} returns sandbox id \"86ba98dfbadc4a84c52734801ae91de2e4c505604a22a6b6392ca10e298e1536\""
Mar 19 12:03:40.812212 containerd[1526]: time="2025-03-19T12:03:40.811469713Z" level=info msg="CreateContainer within sandbox \"86ba98dfbadc4a84c52734801ae91de2e4c505604a22a6b6392ca10e298e1536\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 12:03:40.838851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104194606.mount: Deactivated successfully.
Mar 19 12:03:40.850382 containerd[1526]: time="2025-03-19T12:03:40.850318055Z" level=info msg="CreateContainer within sandbox \"86ba98dfbadc4a84c52734801ae91de2e4c505604a22a6b6392ca10e298e1536\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe8a47c2d5d8efe483b8327de12486f6a91a51730e04ef9aba93808ca5035190\""
Mar 19 12:03:40.852173 containerd[1526]: time="2025-03-19T12:03:40.852134519Z" level=info msg="StartContainer for \"fe8a47c2d5d8efe483b8327de12486f6a91a51730e04ef9aba93808ca5035190\""
Mar 19 12:03:40.875815 containerd[1526]: time="2025-03-19T12:03:40.875576464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 12:03:40.875986 containerd[1526]: time="2025-03-19T12:03:40.875891788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 12:03:40.876130 containerd[1526]: time="2025-03-19T12:03:40.876037149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 12:03:40.878611 containerd[1526]: time="2025-03-19T12:03:40.878489053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 12:03:40.903055 systemd[1]: Started cri-containerd-fe8a47c2d5d8efe483b8327de12486f6a91a51730e04ef9aba93808ca5035190.scope - libcontainer container fe8a47c2d5d8efe483b8327de12486f6a91a51730e04ef9aba93808ca5035190.
Mar 19 12:03:40.917410 systemd[1]: Started cri-containerd-3bfd959ac94259ff61bd43a584eb52bd4a679209478b1ca1560432cb74dbac60.scope - libcontainer container 3bfd959ac94259ff61bd43a584eb52bd4a679209478b1ca1560432cb74dbac60.
Mar 19 12:03:40.968474 containerd[1526]: time="2025-03-19T12:03:40.968424118Z" level=info msg="StartContainer for \"fe8a47c2d5d8efe483b8327de12486f6a91a51730e04ef9aba93808ca5035190\" returns successfully"
Mar 19 12:03:41.006672 containerd[1526]: time="2025-03-19T12:03:41.006620855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8kxq8,Uid:49fccb47-784a-4992-95fd-7a73085b17d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bfd959ac94259ff61bd43a584eb52bd4a679209478b1ca1560432cb74dbac60\""
Mar 19 12:03:41.012386 containerd[1526]: time="2025-03-19T12:03:41.011976164Z" level=info msg="CreateContainer within sandbox \"3bfd959ac94259ff61bd43a584eb52bd4a679209478b1ca1560432cb74dbac60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 12:03:41.054909 containerd[1526]: time="2025-03-19T12:03:41.054659774Z" level=info msg="CreateContainer within sandbox \"3bfd959ac94259ff61bd43a584eb52bd4a679209478b1ca1560432cb74dbac60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"175d56bf9379ae1e8a3c27cf38dc92a190bf3c78401252afacdb67a28a083454\""
Mar 19 12:03:41.055888 containerd[1526]: time="2025-03-19T12:03:41.055625032Z" level=info msg="StartContainer for \"175d56bf9379ae1e8a3c27cf38dc92a190bf3c78401252afacdb67a28a083454\""
Mar 19 12:03:41.114596 systemd[1]: Started cri-containerd-175d56bf9379ae1e8a3c27cf38dc92a190bf3c78401252afacdb67a28a083454.scope - libcontainer container 175d56bf9379ae1e8a3c27cf38dc92a190bf3c78401252afacdb67a28a083454.
Mar 19 12:03:41.172870 containerd[1526]: time="2025-03-19T12:03:41.172726890Z" level=info msg="StartContainer for \"175d56bf9379ae1e8a3c27cf38dc92a190bf3c78401252afacdb67a28a083454\" returns successfully"
Mar 19 12:03:41.459627 kubelet[2839]: I0319 12:03:41.459122 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8kxq8" podStartSLOduration=29.459081848 podStartE2EDuration="29.459081848s" podCreationTimestamp="2025-03-19 12:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 12:03:41.456872298 +0000 UTC m=+42.500785573" watchObservedRunningTime="2025-03-19 12:03:41.459081848 +0000 UTC m=+42.502995105"
Mar 19 12:03:41.644760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975407533.mount: Deactivated successfully.
Mar 19 12:04:13.354870 systemd[1]: Started sshd@9-10.230.57.154:22-139.178.89.65:52094.service - OpenSSH per-connection server daemon (139.178.89.65:52094).
Mar 19 12:04:14.310282 sshd[4211]: Accepted publickey for core from 139.178.89.65 port 52094 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:14.313108 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:14.323384 systemd-logind[1509]: New session 12 of user core.
Mar 19 12:04:14.329418 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 19 12:04:15.462485 sshd[4215]: Connection closed by 139.178.89.65 port 52094
Mar 19 12:04:15.463798 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:15.469142 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit.
Mar 19 12:04:15.470970 systemd[1]: sshd@9-10.230.57.154:22-139.178.89.65:52094.service: Deactivated successfully.
Mar 19 12:04:15.474153 systemd[1]: session-12.scope: Deactivated successfully.
Mar 19 12:04:15.476082 systemd-logind[1509]: Removed session 12.
Mar 19 12:04:20.625827 systemd[1]: Started sshd@10-10.230.57.154:22-139.178.89.65:52106.service - OpenSSH per-connection server daemon (139.178.89.65:52106).
Mar 19 12:04:21.566572 sshd[4228]: Accepted publickey for core from 139.178.89.65 port 52106 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:21.569660 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:21.576370 systemd-logind[1509]: New session 13 of user core.
Mar 19 12:04:21.585433 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 19 12:04:22.283729 sshd[4230]: Connection closed by 139.178.89.65 port 52106
Mar 19 12:04:22.285347 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:22.291234 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit.
Mar 19 12:04:22.292907 systemd[1]: sshd@10-10.230.57.154:22-139.178.89.65:52106.service: Deactivated successfully.
Mar 19 12:04:22.295972 systemd[1]: session-13.scope: Deactivated successfully.
Mar 19 12:04:22.297973 systemd-logind[1509]: Removed session 13.
Mar 19 12:04:27.447552 systemd[1]: Started sshd@11-10.230.57.154:22-139.178.89.65:56976.service - OpenSSH per-connection server daemon (139.178.89.65:56976).
Mar 19 12:04:28.344527 sshd[4243]: Accepted publickey for core from 139.178.89.65 port 56976 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:28.346706 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:28.354692 systemd-logind[1509]: New session 14 of user core.
Mar 19 12:04:28.360366 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 19 12:04:29.041056 sshd[4245]: Connection closed by 139.178.89.65 port 56976
Mar 19 12:04:29.042061 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:29.047220 systemd[1]: sshd@11-10.230.57.154:22-139.178.89.65:56976.service: Deactivated successfully.
Mar 19 12:04:29.049864 systemd[1]: session-14.scope: Deactivated successfully.
Mar 19 12:04:29.051319 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Mar 19 12:04:29.052954 systemd-logind[1509]: Removed session 14.
Mar 19 12:04:34.203617 systemd[1]: Started sshd@12-10.230.57.154:22-139.178.89.65:34218.service - OpenSSH per-connection server daemon (139.178.89.65:34218).
Mar 19 12:04:35.108573 sshd[4258]: Accepted publickey for core from 139.178.89.65 port 34218 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:35.110926 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:35.117889 systemd-logind[1509]: New session 15 of user core.
Mar 19 12:04:35.124421 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 19 12:04:35.816697 sshd[4261]: Connection closed by 139.178.89.65 port 34218
Mar 19 12:04:35.817742 sshd-session[4258]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:35.823002 systemd[1]: sshd@12-10.230.57.154:22-139.178.89.65:34218.service: Deactivated successfully.
Mar 19 12:04:35.825786 systemd[1]: session-15.scope: Deactivated successfully.
Mar 19 12:04:35.826971 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Mar 19 12:04:35.828548 systemd-logind[1509]: Removed session 15.
Mar 19 12:04:40.976684 systemd[1]: Started sshd@13-10.230.57.154:22-139.178.89.65:34230.service - OpenSSH per-connection server daemon (139.178.89.65:34230).
Mar 19 12:04:41.871141 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 34230 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:41.873415 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:41.880246 systemd-logind[1509]: New session 16 of user core.
Mar 19 12:04:41.892476 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 19 12:04:42.577629 sshd[4276]: Connection closed by 139.178.89.65 port 34230
Mar 19 12:04:42.578594 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:42.584590 systemd[1]: sshd@13-10.230.57.154:22-139.178.89.65:34230.service: Deactivated successfully.
Mar 19 12:04:42.587517 systemd[1]: session-16.scope: Deactivated successfully.
Mar 19 12:04:42.588806 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Mar 19 12:04:42.590672 systemd-logind[1509]: Removed session 16.
Mar 19 12:04:42.737625 systemd[1]: Started sshd@14-10.230.57.154:22-139.178.89.65:49492.service - OpenSSH per-connection server daemon (139.178.89.65:49492).
Mar 19 12:04:43.629915 sshd[4288]: Accepted publickey for core from 139.178.89.65 port 49492 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:43.632379 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:43.640422 systemd-logind[1509]: New session 17 of user core.
Mar 19 12:04:43.644467 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 19 12:04:44.407218 sshd[4292]: Connection closed by 139.178.89.65 port 49492
Mar 19 12:04:44.406527 sshd-session[4288]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:44.413136 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Mar 19 12:04:44.414469 systemd[1]: sshd@14-10.230.57.154:22-139.178.89.65:49492.service: Deactivated successfully.
Mar 19 12:04:44.418562 systemd[1]: session-17.scope: Deactivated successfully.
Mar 19 12:04:44.421154 systemd-logind[1509]: Removed session 17.
Mar 19 12:04:44.578408 systemd[1]: Started sshd@15-10.230.57.154:22-139.178.89.65:49506.service - OpenSSH per-connection server daemon (139.178.89.65:49506).
Mar 19 12:04:45.474562 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 49506 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:45.476582 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:45.484864 systemd-logind[1509]: New session 18 of user core.
Mar 19 12:04:45.491436 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 19 12:04:46.179018 sshd[4304]: Connection closed by 139.178.89.65 port 49506
Mar 19 12:04:46.180018 sshd-session[4302]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:46.185851 systemd[1]: sshd@15-10.230.57.154:22-139.178.89.65:49506.service: Deactivated successfully.
Mar 19 12:04:46.188411 systemd[1]: session-18.scope: Deactivated successfully.
Mar 19 12:04:46.189648 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Mar 19 12:04:46.191151 systemd-logind[1509]: Removed session 18.
Mar 19 12:04:51.339558 systemd[1]: Started sshd@16-10.230.57.154:22-139.178.89.65:52538.service - OpenSSH per-connection server daemon (139.178.89.65:52538).
Mar 19 12:04:52.229561 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 52538 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:52.231529 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:52.238995 systemd-logind[1509]: New session 19 of user core.
Mar 19 12:04:52.244440 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 19 12:04:52.927929 sshd[4318]: Connection closed by 139.178.89.65 port 52538
Mar 19 12:04:52.927781 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:52.933225 systemd[1]: sshd@16-10.230.57.154:22-139.178.89.65:52538.service: Deactivated successfully.
Mar 19 12:04:52.935974 systemd[1]: session-19.scope: Deactivated successfully.
Mar 19 12:04:52.937081 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Mar 19 12:04:52.938916 systemd-logind[1509]: Removed session 19.
Mar 19 12:04:58.087609 systemd[1]: Started sshd@17-10.230.57.154:22-139.178.89.65:52552.service - OpenSSH per-connection server daemon (139.178.89.65:52552).
Mar 19 12:04:58.984504 sshd[4330]: Accepted publickey for core from 139.178.89.65 port 52552 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:04:58.986597 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:04:58.994594 systemd-logind[1509]: New session 20 of user core.
Mar 19 12:04:59.001405 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 19 12:04:59.722769 sshd[4332]: Connection closed by 139.178.89.65 port 52552
Mar 19 12:04:59.724554 sshd-session[4330]: pam_unix(sshd:session): session closed for user core
Mar 19 12:04:59.729449 systemd[1]: sshd@17-10.230.57.154:22-139.178.89.65:52552.service: Deactivated successfully.
Mar 19 12:04:59.744289 systemd[1]: session-20.scope: Deactivated successfully.
Mar 19 12:04:59.746866 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Mar 19 12:04:59.748599 systemd-logind[1509]: Removed session 20.
Mar 19 12:04:59.885637 systemd[1]: Started sshd@18-10.230.57.154:22-139.178.89.65:52558.service - OpenSSH per-connection server daemon (139.178.89.65:52558).
Mar 19 12:05:00.790278 sshd[4346]: Accepted publickey for core from 139.178.89.65 port 52558 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:00.793407 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:00.805382 systemd-logind[1509]: New session 21 of user core.
Mar 19 12:05:00.811475 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 19 12:05:01.868409 sshd[4348]: Connection closed by 139.178.89.65 port 52558
Mar 19 12:05:01.869604 sshd-session[4346]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:01.874448 systemd[1]: sshd@18-10.230.57.154:22-139.178.89.65:52558.service: Deactivated successfully.
Mar 19 12:05:01.877621 systemd[1]: session-21.scope: Deactivated successfully.
Mar 19 12:05:01.879853 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit.
Mar 19 12:05:01.881314 systemd-logind[1509]: Removed session 21.
Mar 19 12:05:02.028766 systemd[1]: Started sshd@19-10.230.57.154:22-139.178.89.65:35888.service - OpenSSH per-connection server daemon (139.178.89.65:35888).
Mar 19 12:05:02.953222 sshd[4358]: Accepted publickey for core from 139.178.89.65 port 35888 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:02.955231 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:02.965942 systemd-logind[1509]: New session 22 of user core.
Mar 19 12:05:02.973514 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 19 12:05:05.833275 sshd[4360]: Connection closed by 139.178.89.65 port 35888
Mar 19 12:05:05.834743 sshd-session[4358]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:05.840170 systemd[1]: sshd@19-10.230.57.154:22-139.178.89.65:35888.service: Deactivated successfully.
Mar 19 12:05:05.842787 systemd[1]: session-22.scope: Deactivated successfully.
Mar 19 12:05:05.843101 systemd[1]: session-22.scope: Consumed 753ms CPU time, 65.2M memory peak.
Mar 19 12:05:05.845004 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit.
Mar 19 12:05:05.847475 systemd-logind[1509]: Removed session 22.
Mar 19 12:05:05.992618 systemd[1]: Started sshd@20-10.230.57.154:22-139.178.89.65:35890.service - OpenSSH per-connection server daemon (139.178.89.65:35890).
Mar 19 12:05:07.029696 sshd[4377]: Accepted publickey for core from 139.178.89.65 port 35890 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:07.031734 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:07.039610 systemd-logind[1509]: New session 23 of user core.
Mar 19 12:05:07.049446 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 19 12:05:07.954107 sshd[4379]: Connection closed by 139.178.89.65 port 35890
Mar 19 12:05:07.953170 sshd-session[4377]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:07.957555 systemd[1]: sshd@20-10.230.57.154:22-139.178.89.65:35890.service: Deactivated successfully.
Mar 19 12:05:07.960156 systemd[1]: session-23.scope: Deactivated successfully.
Mar 19 12:05:07.962444 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit.
Mar 19 12:05:07.964169 systemd-logind[1509]: Removed session 23.
Mar 19 12:05:08.121553 systemd[1]: Started sshd@21-10.230.57.154:22-139.178.89.65:35902.service - OpenSSH per-connection server daemon (139.178.89.65:35902).
Mar 19 12:05:09.020083 sshd[4389]: Accepted publickey for core from 139.178.89.65 port 35902 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:09.022091 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:09.029730 systemd-logind[1509]: New session 24 of user core.
Mar 19 12:05:09.036403 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 19 12:05:09.725254 sshd[4391]: Connection closed by 139.178.89.65 port 35902
Mar 19 12:05:09.726286 sshd-session[4389]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:09.731525 systemd[1]: sshd@21-10.230.57.154:22-139.178.89.65:35902.service: Deactivated successfully.
Mar 19 12:05:09.734301 systemd[1]: session-24.scope: Deactivated successfully.
Mar 19 12:05:09.736011 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit.
Mar 19 12:05:09.737667 systemd-logind[1509]: Removed session 24.
Mar 19 12:05:14.885800 systemd[1]: Started sshd@22-10.230.57.154:22-139.178.89.65:37284.service - OpenSSH per-connection server daemon (139.178.89.65:37284).
Mar 19 12:05:15.780754 sshd[4409]: Accepted publickey for core from 139.178.89.65 port 37284 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:15.782846 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:15.790544 systemd-logind[1509]: New session 25 of user core.
Mar 19 12:05:15.797420 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 19 12:05:16.485657 sshd[4411]: Connection closed by 139.178.89.65 port 37284
Mar 19 12:05:16.487237 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:16.492255 systemd[1]: sshd@22-10.230.57.154:22-139.178.89.65:37284.service: Deactivated successfully.
Mar 19 12:05:16.495673 systemd[1]: session-25.scope: Deactivated successfully.
Mar 19 12:05:16.497716 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit.
Mar 19 12:05:16.499353 systemd-logind[1509]: Removed session 25.
Mar 19 12:05:21.649247 systemd[1]: Started sshd@23-10.230.57.154:22-139.178.89.65:58280.service - OpenSSH per-connection server daemon (139.178.89.65:58280).
Mar 19 12:05:22.540121 sshd[4423]: Accepted publickey for core from 139.178.89.65 port 58280 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:22.542093 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:22.549067 systemd-logind[1509]: New session 26 of user core.
Mar 19 12:05:22.558528 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 19 12:05:23.239390 sshd[4425]: Connection closed by 139.178.89.65 port 58280
Mar 19 12:05:23.240379 sshd-session[4423]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:23.246145 systemd[1]: sshd@23-10.230.57.154:22-139.178.89.65:58280.service: Deactivated successfully.
Mar 19 12:05:23.249645 systemd[1]: session-26.scope: Deactivated successfully.
Mar 19 12:05:23.251207 systemd-logind[1509]: Session 26 logged out. Waiting for processes to exit.
Mar 19 12:05:23.252786 systemd-logind[1509]: Removed session 26.
Mar 19 12:05:28.402660 systemd[1]: Started sshd@24-10.230.57.154:22-139.178.89.65:58288.service - OpenSSH per-connection server daemon (139.178.89.65:58288).
Mar 19 12:05:29.299226 sshd[4438]: Accepted publickey for core from 139.178.89.65 port 58288 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:29.301513 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:29.311372 systemd-logind[1509]: New session 27 of user core.
Mar 19 12:05:29.315390 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 19 12:05:29.999523 sshd[4440]: Connection closed by 139.178.89.65 port 58288
Mar 19 12:05:30.000856 sshd-session[4438]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:30.005300 systemd[1]: sshd@24-10.230.57.154:22-139.178.89.65:58288.service: Deactivated successfully.
Mar 19 12:05:30.008876 systemd[1]: session-27.scope: Deactivated successfully.
Mar 19 12:05:30.010258 systemd-logind[1509]: Session 27 logged out. Waiting for processes to exit.
Mar 19 12:05:30.012402 systemd-logind[1509]: Removed session 27.
Mar 19 12:05:30.165693 systemd[1]: Started sshd@25-10.230.57.154:22-139.178.89.65:58300.service - OpenSSH per-connection server daemon (139.178.89.65:58300).
Mar 19 12:05:31.052662 sshd[4452]: Accepted publickey for core from 139.178.89.65 port 58300 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:31.054590 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:31.065754 systemd-logind[1509]: New session 28 of user core.
Mar 19 12:05:31.073456 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 19 12:05:33.466967 kubelet[2839]: I0319 12:05:33.466814 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c2chg" podStartSLOduration=141.466671352 podStartE2EDuration="2m21.466671352s" podCreationTimestamp="2025-03-19 12:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 12:03:41.498612889 +0000 UTC m=+42.542526151" watchObservedRunningTime="2025-03-19 12:05:33.466671352 +0000 UTC m=+154.510584611"
Mar 19 12:05:33.523152 containerd[1526]: time="2025-03-19T12:05:33.523090896Z" level=info msg="StopContainer for \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\" with timeout 30 (s)"
Mar 19 12:05:33.530661 containerd[1526]: time="2025-03-19T12:05:33.529530201Z" level=info msg="Stop container \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\" with signal terminated"
Mar 19 12:05:33.609867 systemd[1]: cri-containerd-75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63.scope: Deactivated successfully.
Mar 19 12:05:33.651783 containerd[1526]: time="2025-03-19T12:05:33.651623711Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 12:05:33.663839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63-rootfs.mount: Deactivated successfully.
Mar 19 12:05:33.671143 containerd[1526]: time="2025-03-19T12:05:33.668429073Z" level=info msg="StopContainer for \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\" with timeout 2 (s)"
Mar 19 12:05:33.671143 containerd[1526]: time="2025-03-19T12:05:33.668864867Z" level=info msg="Stop container \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\" with signal terminated"
Mar 19 12:05:33.682659 systemd-networkd[1447]: lxc_health: Link DOWN
Mar 19 12:05:33.682756 systemd-networkd[1447]: lxc_health: Lost carrier
Mar 19 12:05:33.703029 systemd[1]: cri-containerd-f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5.scope: Deactivated successfully.
Mar 19 12:05:33.703905 systemd[1]: cri-containerd-f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5.scope: Consumed 10.406s CPU time, 194.5M memory peak, 70.3M read from disk, 13.3M written to disk.
Mar 19 12:05:33.706652 containerd[1526]: time="2025-03-19T12:05:33.706111508Z" level=info msg="shim disconnected" id=75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63 namespace=k8s.io
Mar 19 12:05:33.707850 containerd[1526]: time="2025-03-19T12:05:33.706838440Z" level=warning msg="cleaning up after shim disconnected" id=75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63 namespace=k8s.io
Mar 19 12:05:33.707850 containerd[1526]: time="2025-03-19T12:05:33.707621104Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 12:05:33.746325 containerd[1526]: time="2025-03-19T12:05:33.745353625Z" level=info msg="StopContainer for \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\" returns successfully"
Mar 19 12:05:33.747441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5-rootfs.mount: Deactivated successfully.
Mar 19 12:05:33.750317 containerd[1526]: time="2025-03-19T12:05:33.750159981Z" level=info msg="StopPodSandbox for \"e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102\""
Mar 19 12:05:33.752241 containerd[1526]: time="2025-03-19T12:05:33.752077298Z" level=info msg="Container to stop \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 12:05:33.756047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102-shm.mount: Deactivated successfully.
Mar 19 12:05:33.757232 containerd[1526]: time="2025-03-19T12:05:33.756714156Z" level=info msg="shim disconnected" id=f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5 namespace=k8s.io
Mar 19 12:05:33.757232 containerd[1526]: time="2025-03-19T12:05:33.756775604Z" level=warning msg="cleaning up after shim disconnected" id=f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5 namespace=k8s.io
Mar 19 12:05:33.757232 containerd[1526]: time="2025-03-19T12:05:33.756792705Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 12:05:33.772249 systemd[1]: cri-containerd-e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102.scope: Deactivated successfully.
Mar 19 12:05:33.788690 containerd[1526]: time="2025-03-19T12:05:33.788558036Z" level=info msg="StopContainer for \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\" returns successfully"
Mar 19 12:05:33.789812 containerd[1526]: time="2025-03-19T12:05:33.789653966Z" level=info msg="StopPodSandbox for \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\""
Mar 19 12:05:33.789975 containerd[1526]: time="2025-03-19T12:05:33.789830430Z" level=info msg="Container to stop \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 12:05:33.789975 containerd[1526]: time="2025-03-19T12:05:33.789932823Z" level=info msg="Container to stop \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 12:05:33.790254 containerd[1526]: time="2025-03-19T12:05:33.789982036Z" level=info msg="Container to stop \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 12:05:33.790254 containerd[1526]: time="2025-03-19T12:05:33.789999118Z" level=info msg="Container to stop \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 12:05:33.790254 containerd[1526]: time="2025-03-19T12:05:33.790014500Z" level=info msg="Container to stop \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 12:05:33.795762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4-shm.mount: Deactivated successfully.
Mar 19 12:05:33.804368 systemd[1]: cri-containerd-d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4.scope: Deactivated successfully.
Mar 19 12:05:33.823281 containerd[1526]: time="2025-03-19T12:05:33.823177533Z" level=info msg="shim disconnected" id=e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102 namespace=k8s.io
Mar 19 12:05:33.824120 containerd[1526]: time="2025-03-19T12:05:33.823869447Z" level=warning msg="cleaning up after shim disconnected" id=e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102 namespace=k8s.io
Mar 19 12:05:33.824120 containerd[1526]: time="2025-03-19T12:05:33.823901212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 12:05:33.861349 containerd[1526]: time="2025-03-19T12:05:33.861169311Z" level=info msg="TearDown network for sandbox \"e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102\" successfully"
Mar 19 12:05:33.861349 containerd[1526]: time="2025-03-19T12:05:33.861285985Z" level=info msg="StopPodSandbox for \"e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102\" returns successfully"
Mar 19 12:05:33.866098 containerd[1526]: time="2025-03-19T12:05:33.866005131Z" level=info msg="shim disconnected" id=d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4 namespace=k8s.io
Mar 19 12:05:33.866098 containerd[1526]: time="2025-03-19T12:05:33.866084391Z" level=warning msg="cleaning up after shim disconnected" id=d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4 namespace=k8s.io
Mar 19 12:05:33.866330 containerd[1526]: time="2025-03-19T12:05:33.866101876Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 12:05:33.899199 containerd[1526]: time="2025-03-19T12:05:33.898980008Z" level=info msg="TearDown network for sandbox \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" successfully"
Mar 19 12:05:33.899199 containerd[1526]: time="2025-03-19T12:05:33.899044395Z" level=info msg="StopPodSandbox for \"d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4\" returns successfully"
Mar 19 12:05:33.967795 kubelet[2839]: I0319 12:05:33.967490 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhtst\" (UniqueName: \"kubernetes.io/projected/6552963a-9e71-42ae-8b73-d4b7ef6e393b-kube-api-access-mhtst\") pod \"6552963a-9e71-42ae-8b73-d4b7ef6e393b\" (UID: \"6552963a-9e71-42ae-8b73-d4b7ef6e393b\") "
Mar 19 12:05:33.967795 kubelet[2839]: I0319 12:05:33.967619 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6552963a-9e71-42ae-8b73-d4b7ef6e393b-cilium-config-path\") pod \"6552963a-9e71-42ae-8b73-d4b7ef6e393b\" (UID: \"6552963a-9e71-42ae-8b73-d4b7ef6e393b\") "
Mar 19 12:05:33.981885 kubelet[2839]: I0319 12:05:33.980583 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6552963a-9e71-42ae-8b73-d4b7ef6e393b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6552963a-9e71-42ae-8b73-d4b7ef6e393b" (UID: "6552963a-9e71-42ae-8b73-d4b7ef6e393b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 12:05:33.983021 kubelet[2839]: I0319 12:05:33.982966 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6552963a-9e71-42ae-8b73-d4b7ef6e393b-kube-api-access-mhtst" (OuterVolumeSpecName: "kube-api-access-mhtst") pod "6552963a-9e71-42ae-8b73-d4b7ef6e393b" (UID: "6552963a-9e71-42ae-8b73-d4b7ef6e393b"). InnerVolumeSpecName "kube-api-access-mhtst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 12:05:34.069057 kubelet[2839]: I0319 12:05:34.068850 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 12:05:34.069057 kubelet[2839]: I0319 12:05:34.068978 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-lib-modules\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") "
Mar 19 12:05:34.069057 kubelet[2839]: I0319 12:05:34.069026 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-kernel\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") "
Mar 19 12:05:34.069426 kubelet[2839]: I0319 12:05:34.069079 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-config-path\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID:
\"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069426 kubelet[2839]: I0319 12:05:34.069106 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-xtables-lock\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069426 kubelet[2839]: I0319 12:05:34.069131 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-run\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069426 kubelet[2839]: I0319 12:05:34.069393 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74hnd\" (UniqueName: \"kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-kube-api-access-74hnd\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069728 kubelet[2839]: I0319 12:05:34.069438 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-etc-cni-netd\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069728 kubelet[2839]: I0319 12:05:34.069464 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cni-path\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069728 kubelet[2839]: I0319 12:05:34.069512 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-clustermesh-secrets\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069728 kubelet[2839]: I0319 12:05:34.069540 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hostproc\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069728 kubelet[2839]: I0319 12:05:34.069562 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-bpf-maps\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.069728 kubelet[2839]: I0319 12:05:34.069607 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hubble-tls\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.070001 kubelet[2839]: I0319 12:05:34.069650 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-net\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.070001 kubelet[2839]: I0319 12:05:34.069745 2839 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-cgroup\") pod \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\" (UID: \"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00\") " Mar 19 12:05:34.070144 kubelet[2839]: I0319 12:05:34.070020 2839 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.070144 kubelet[2839]: I0319 12:05:34.070061 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.072991 kubelet[2839]: I0319 12:05:34.072379 2839 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-lib-modules\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.072991 kubelet[2839]: I0319 12:05:34.072424 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6552963a-9e71-42ae-8b73-d4b7ef6e393b-cilium-config-path\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.072991 kubelet[2839]: I0319 12:05:34.072443 2839 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mhtst\" (UniqueName: \"kubernetes.io/projected/6552963a-9e71-42ae-8b73-d4b7ef6e393b-kube-api-access-mhtst\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.072991 kubelet[2839]: I0319 12:05:34.072482 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: 
"72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.072991 kubelet[2839]: I0319 12:05:34.072539 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cni-path" (OuterVolumeSpecName: "cni-path") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.075314 kubelet[2839]: I0319 12:05:34.075161 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 12:05:34.075314 kubelet[2839]: I0319 12:05:34.075230 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.075314 kubelet[2839]: I0319 12:05:34.075266 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.076811 kubelet[2839]: I0319 12:05:34.076769 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 12:05:34.076975 kubelet[2839]: I0319 12:05:34.076948 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hostproc" (OuterVolumeSpecName: "hostproc") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.077125 kubelet[2839]: I0319 12:05:34.077099 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.079001 kubelet[2839]: I0319 12:05:34.078964 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-kube-api-access-74hnd" (OuterVolumeSpecName: "kube-api-access-74hnd") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "kube-api-access-74hnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:05:34.079084 kubelet[2839]: I0319 12:05:34.079024 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 12:05:34.080479 kubelet[2839]: I0319 12:05:34.080434 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" (UID: "72d8bf8f-c353-4f8a-a457-3a6f94f2aa00"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 12:05:34.173473 kubelet[2839]: I0319 12:05:34.173416 2839 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-xtables-lock\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.173733 kubelet[2839]: I0319 12:05:34.173711 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-run\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.173870 kubelet[2839]: I0319 12:05:34.173848 2839 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-74hnd\" (UniqueName: \"kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-kube-api-access-74hnd\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.173984 kubelet[2839]: I0319 12:05:34.173962 2839 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-etc-cni-netd\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.174229 kubelet[2839]: I0319 12:05:34.174176 2839 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cni-path\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.174364 kubelet[2839]: I0319 12:05:34.174326 2839 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-clustermesh-secrets\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.174481 kubelet[2839]: I0319 12:05:34.174460 2839 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hostproc\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.174588 kubelet[2839]: I0319 12:05:34.174567 2839 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-bpf-maps\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.174712 kubelet[2839]: I0319 12:05:34.174691 2839 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-hubble-tls\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.174829 kubelet[2839]: I0319 12:05:34.174808 2839 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-net\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.174933 kubelet[2839]: I0319 12:05:34.174912 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-cgroup\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.175034 kubelet[2839]: I0319 12:05:34.175015 2839 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-host-proc-sys-kernel\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.175137 kubelet[2839]: I0319 12:05:34.175117 2839 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00-cilium-config-path\") on node \"srv-z8dvi.gb1.brightbox.com\" DevicePath \"\"" Mar 19 12:05:34.425004 kubelet[2839]: E0319 12:05:34.417406 2839 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 12:05:34.622215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e79a23c2185aa4a9bd8ed4b5e042005c818230d081e0ea73289b2c9c7185f102-rootfs.mount: Deactivated successfully. Mar 19 12:05:34.622409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0f530ee1650b240c5ae55043ede5e486820038b9308421e45c0d90a89bee4e4-rootfs.mount: Deactivated successfully. Mar 19 12:05:34.622567 systemd[1]: var-lib-kubelet-pods-6552963a\x2d9e71\x2d42ae\x2d8b73\x2dd4b7ef6e393b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmhtst.mount: Deactivated successfully. Mar 19 12:05:34.622715 systemd[1]: var-lib-kubelet-pods-72d8bf8f\x2dc353\x2d4f8a\x2da457\x2d3a6f94f2aa00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74hnd.mount: Deactivated successfully. Mar 19 12:05:34.622846 systemd[1]: var-lib-kubelet-pods-72d8bf8f\x2dc353\x2d4f8a\x2da457\x2d3a6f94f2aa00-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 19 12:05:34.622965 systemd[1]: var-lib-kubelet-pods-72d8bf8f\x2dc353\x2d4f8a\x2da457\x2d3a6f94f2aa00-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 19 12:05:34.726916 systemd[1]: Removed slice kubepods-burstable-pod72d8bf8f_c353_4f8a_a457_3a6f94f2aa00.slice - libcontainer container kubepods-burstable-pod72d8bf8f_c353_4f8a_a457_3a6f94f2aa00.slice.
Mar 19 12:05:34.727073 systemd[1]: kubepods-burstable-pod72d8bf8f_c353_4f8a_a457_3a6f94f2aa00.slice: Consumed 10.536s CPU time, 194.9M memory peak, 70.3M read from disk, 13.3M written to disk.
Mar 19 12:05:34.733004 kubelet[2839]: I0319 12:05:34.732950 2839 scope.go:117] "RemoveContainer" containerID="f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5"
Mar 19 12:05:34.768748 containerd[1526]: time="2025-03-19T12:05:34.768689529Z" level=info msg="RemoveContainer for \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\""
Mar 19 12:05:34.770192 systemd[1]: Removed slice kubepods-besteffort-pod6552963a_9e71_42ae_8b73_d4b7ef6e393b.slice - libcontainer container kubepods-besteffort-pod6552963a_9e71_42ae_8b73_d4b7ef6e393b.slice.
Mar 19 12:05:34.789334 containerd[1526]: time="2025-03-19T12:05:34.789272404Z" level=info msg="RemoveContainer for \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\" returns successfully"
Mar 19 12:05:34.789746 kubelet[2839]: I0319 12:05:34.789696 2839 scope.go:117] "RemoveContainer" containerID="5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae"
Mar 19 12:05:34.791777 containerd[1526]: time="2025-03-19T12:05:34.791742882Z" level=info msg="RemoveContainer for \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\""
Mar 19 12:05:34.807318 containerd[1526]: time="2025-03-19T12:05:34.807174744Z" level=info msg="RemoveContainer for \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\" returns successfully"
Mar 19 12:05:34.808145 kubelet[2839]: I0319 12:05:34.807697 2839 scope.go:117] "RemoveContainer" containerID="be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c"
Mar 19 12:05:34.809128 containerd[1526]: time="2025-03-19T12:05:34.809094452Z" level=info msg="RemoveContainer for \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\""
Mar 19 12:05:34.812301 containerd[1526]: time="2025-03-19T12:05:34.812257746Z" level=info msg="RemoveContainer for \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\" returns successfully"
Mar 19 12:05:34.812493 kubelet[2839]: I0319 12:05:34.812466 2839 scope.go:117] "RemoveContainer" containerID="54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382"
Mar 19 12:05:34.814034 containerd[1526]: time="2025-03-19T12:05:34.813989028Z" level=info msg="RemoveContainer for \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\""
Mar 19 12:05:34.823253 containerd[1526]: time="2025-03-19T12:05:34.823200572Z" level=info msg="RemoveContainer for \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\" returns successfully"
Mar 19 12:05:34.823451 kubelet[2839]: I0319 12:05:34.823425 2839 scope.go:117] "RemoveContainer" containerID="de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194"
Mar 19 12:05:34.825153 containerd[1526]: time="2025-03-19T12:05:34.825124705Z" level=info msg="RemoveContainer for \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\""
Mar 19 12:05:34.831406 containerd[1526]: time="2025-03-19T12:05:34.831275186Z" level=info msg="RemoveContainer for \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\" returns successfully"
Mar 19 12:05:34.831559 kubelet[2839]: I0319 12:05:34.831523 2839 scope.go:117] "RemoveContainer" containerID="f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5"
Mar 19 12:05:34.831856 containerd[1526]: time="2025-03-19T12:05:34.831797090Z" level=error msg="ContainerStatus for \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\": not found"
Mar 19 12:05:34.843773 kubelet[2839]: E0319 12:05:34.843698 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\": not found" containerID="f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5"
Mar 19 12:05:34.843948 kubelet[2839]: I0319 12:05:34.843792 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5"} err="failed to get container status \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f212149daf98d6d6e12bad78843727379cfb8bcb8b0eeb54f1c054159c156ba5\": not found"
Mar 19 12:05:34.844017 kubelet[2839]: I0319 12:05:34.843957 2839 scope.go:117] "RemoveContainer" containerID="5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae"
Mar 19 12:05:34.844439 containerd[1526]: time="2025-03-19T12:05:34.844367088Z" level=error msg="ContainerStatus for \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\": not found"
Mar 19 12:05:34.844872 kubelet[2839]: E0319 12:05:34.844671 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\": not found" containerID="5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae"
Mar 19 12:05:34.844872 kubelet[2839]: I0319 12:05:34.844710 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae"} err="failed to get container status \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e8d3f10cdef5a8fa0b4ea0c57d9f7d53805f619a3bfe2198f51e0af9092d7ae\": not found"
Mar 19 12:05:34.844872 kubelet[2839]: I0319 12:05:34.844753 2839 scope.go:117] "RemoveContainer" containerID="be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c"
Mar 19 12:05:34.845558 containerd[1526]: time="2025-03-19T12:05:34.845206888Z" level=error msg="ContainerStatus for \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\": not found"
Mar 19 12:05:34.845667 kubelet[2839]: E0319 12:05:34.845393 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\": not found" containerID="be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c"
Mar 19 12:05:34.845667 kubelet[2839]: I0319 12:05:34.845432 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c"} err="failed to get container status \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\": rpc error: code = NotFound desc = an error occurred when try to find container \"be7f60dc4d87b2042f4c5a7737ea74c11ac79a145db35ecf7ff2087c8dc2556c\": not found"
Mar 19 12:05:34.845667 kubelet[2839]: I0319 12:05:34.845456 2839 scope.go:117] "RemoveContainer" containerID="54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382"
Mar 19 12:05:34.845825 containerd[1526]: time="2025-03-19T12:05:34.845686888Z" level=error msg="ContainerStatus for \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\": not found"
Mar 19 12:05:34.845879 kubelet[2839]: E0319 12:05:34.845852 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\": not found" containerID="54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382"
Mar 19 12:05:34.846068 kubelet[2839]: I0319 12:05:34.845882 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382"} err="failed to get container status \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\": rpc error: code = NotFound desc = an error occurred when try to find container \"54c32bfff386eaac48fb097b6fe17da30b323d37f0732be6f2432280fec5f382\": not found"
Mar 19 12:05:34.846068 kubelet[2839]: I0319 12:05:34.845905 2839 scope.go:117] "RemoveContainer" containerID="de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194"
Mar 19 12:05:34.846231 containerd[1526]: time="2025-03-19T12:05:34.846153642Z" level=error msg="ContainerStatus for \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\": not found"
Mar 19 12:05:34.846828 kubelet[2839]: E0319 12:05:34.846340 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\": not found" containerID="de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194"
Mar 19 12:05:34.846828 kubelet[2839]: I0319 12:05:34.846373 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194"} err="failed to get container status \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\": rpc error: code = NotFound desc = an error occurred when try to find container \"de9fe057de46b681af26adb1cf8f1dc01fa73b756d0e48fcac11fd0c15fe5194\": not found"
Mar 19 12:05:34.846828 kubelet[2839]: I0319 12:05:34.846403 2839 scope.go:117] "RemoveContainer" containerID="75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63"
Mar 19 12:05:34.847920 containerd[1526]: time="2025-03-19T12:05:34.847885304Z" level=info msg="RemoveContainer for \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\""
Mar 19 12:05:34.851166 containerd[1526]: time="2025-03-19T12:05:34.851134027Z" level=info msg="RemoveContainer for \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\" returns successfully"
Mar 19 12:05:34.851428 kubelet[2839]: I0319 12:05:34.851363 2839 scope.go:117] "RemoveContainer" containerID="75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63"
Mar 19 12:05:34.851858 containerd[1526]: time="2025-03-19T12:05:34.851821730Z" level=error msg="ContainerStatus for \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\": not found"
Mar 19 12:05:34.852133 kubelet[2839]: E0319 12:05:34.852100 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\": not found" containerID="75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63"
Mar 19 12:05:34.852239 kubelet[2839]: I0319 12:05:34.852139 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63"} err="failed to get container status \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\": rpc error: code = NotFound desc = an error occurred when try to find container \"75672d87da98bcd23a1aad4fce568e82b3758f580261de15ae7d3a7d473bef63\": not found"
Mar 19 12:05:35.190010 kubelet[2839]: I0319 12:05:35.189914 2839 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6552963a-9e71-42ae-8b73-d4b7ef6e393b" path="/var/lib/kubelet/pods/6552963a-9e71-42ae-8b73-d4b7ef6e393b/volumes"
Mar 19 12:05:35.190901 kubelet[2839]: I0319 12:05:35.190854 2839 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" path="/var/lib/kubelet/pods/72d8bf8f-c353-4f8a-a457-3a6f94f2aa00/volumes"
Mar 19 12:05:35.567570 sshd[4454]: Connection closed by 139.178.89.65 port 58300
Mar 19 12:05:35.569237 sshd-session[4452]: pam_unix(sshd:session): session closed for user core
Mar 19 12:05:35.574805 systemd-logind[1509]: Session 28 logged out. Waiting for processes to exit.
Mar 19 12:05:35.576372 systemd[1]: sshd@25-10.230.57.154:22-139.178.89.65:58300.service: Deactivated successfully.
Mar 19 12:05:35.579276 systemd[1]: session-28.scope: Deactivated successfully.
Mar 19 12:05:35.579809 systemd[1]: session-28.scope: Consumed 1.329s CPU time, 26.2M memory peak.
Mar 19 12:05:35.581714 systemd-logind[1509]: Removed session 28.
Mar 19 12:05:35.738592 systemd[1]: Started sshd@26-10.230.57.154:22-139.178.89.65:41224.service - OpenSSH per-connection server daemon (139.178.89.65:41224).
Mar 19 12:05:36.635660 sshd[4613]: Accepted publickey for core from 139.178.89.65 port 41224 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc
Mar 19 12:05:36.637933 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 12:05:36.648308 systemd-logind[1509]: New session 29 of user core.
Mar 19 12:05:36.652438 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 19 12:05:37.943164 kubelet[2839]: I0319 12:05:37.942798 2839 topology_manager.go:215] "Topology Admit Handler" podUID="9ed282e1-7a35-48be-a1de-bd7324ab6830" podNamespace="kube-system" podName="cilium-98dj4" Mar 19 12:05:37.943164 kubelet[2839]: E0319 12:05:37.942958 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6552963a-9e71-42ae-8b73-d4b7ef6e393b" containerName="cilium-operator" Mar 19 12:05:37.943164 kubelet[2839]: E0319 12:05:37.942983 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" containerName="cilium-agent" Mar 19 12:05:37.943164 kubelet[2839]: E0319 12:05:37.942998 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" containerName="mount-cgroup" Mar 19 12:05:37.943164 kubelet[2839]: E0319 12:05:37.943009 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" containerName="apply-sysctl-overwrites" Mar 19 12:05:37.943164 kubelet[2839]: E0319 12:05:37.943020 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" containerName="mount-bpf-fs" Mar 19 12:05:37.943164 kubelet[2839]: E0319 12:05:37.943032 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" containerName="clean-cilium-state" Mar 19 12:05:37.948554 kubelet[2839]: I0319 12:05:37.943092 2839 memory_manager.go:354] "RemoveStaleState removing state" podUID="6552963a-9e71-42ae-8b73-d4b7ef6e393b" containerName="cilium-operator" Mar 19 12:05:37.948554 kubelet[2839]: I0319 12:05:37.947767 2839 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d8bf8f-c353-4f8a-a457-3a6f94f2aa00" containerName="cilium-agent" Mar 19 12:05:37.988877 systemd[1]: Created slice kubepods-burstable-pod9ed282e1_7a35_48be_a1de_bd7324ab6830.slice - libcontainer container 
kubepods-burstable-pod9ed282e1_7a35_48be_a1de_bd7324ab6830.slice. Mar 19 12:05:38.099644 kubelet[2839]: I0319 12:05:38.099495 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-cilium-run\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.099644 kubelet[2839]: I0319 12:05:38.099562 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfdh\" (UniqueName: \"kubernetes.io/projected/9ed282e1-7a35-48be-a1de-bd7324ab6830-kube-api-access-njfdh\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.099644 kubelet[2839]: I0319 12:05:38.099613 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-lib-modules\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.099644 kubelet[2839]: I0319 12:05:38.099644 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-hostproc\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100081 kubelet[2839]: I0319 12:05:38.099689 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ed282e1-7a35-48be-a1de-bd7324ab6830-cilium-ipsec-secrets\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100081 kubelet[2839]: I0319 12:05:38.099717 
2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-bpf-maps\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100081 kubelet[2839]: I0319 12:05:38.099742 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-cilium-cgroup\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100081 kubelet[2839]: I0319 12:05:38.099806 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-xtables-lock\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100081 kubelet[2839]: I0319 12:05:38.099843 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-host-proc-sys-kernel\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100081 kubelet[2839]: I0319 12:05:38.099878 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ed282e1-7a35-48be-a1de-bd7324ab6830-cilium-config-path\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100413 kubelet[2839]: I0319 12:05:38.099909 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/9ed282e1-7a35-48be-a1de-bd7324ab6830-hubble-tls\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100413 kubelet[2839]: I0319 12:05:38.099942 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-host-proc-sys-net\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100413 kubelet[2839]: I0319 12:05:38.099994 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ed282e1-7a35-48be-a1de-bd7324ab6830-clustermesh-secrets\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100413 kubelet[2839]: I0319 12:05:38.100043 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-cni-path\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.100413 kubelet[2839]: I0319 12:05:38.100080 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed282e1-7a35-48be-a1de-bd7324ab6830-etc-cni-netd\") pod \"cilium-98dj4\" (UID: \"9ed282e1-7a35-48be-a1de-bd7324ab6830\") " pod="kube-system/cilium-98dj4" Mar 19 12:05:38.106630 sshd[4615]: Connection closed by 139.178.89.65 port 41224 Mar 19 12:05:38.107435 sshd-session[4613]: pam_unix(sshd:session): session closed for user core Mar 19 12:05:38.113323 systemd[1]: sshd@26-10.230.57.154:22-139.178.89.65:41224.service: Deactivated successfully. 
Mar 19 12:05:38.116057 systemd[1]: session-29.scope: Deactivated successfully. Mar 19 12:05:38.117266 systemd-logind[1509]: Session 29 logged out. Waiting for processes to exit. Mar 19 12:05:38.118820 systemd-logind[1509]: Removed session 29. Mar 19 12:05:38.275559 systemd[1]: Started sshd@27-10.230.57.154:22-139.178.89.65:41234.service - OpenSSH per-connection server daemon (139.178.89.65:41234). Mar 19 12:05:38.293460 containerd[1526]: time="2025-03-19T12:05:38.293402834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-98dj4,Uid:9ed282e1-7a35-48be-a1de-bd7324ab6830,Namespace:kube-system,Attempt:0,}" Mar 19 12:05:38.326252 containerd[1526]: time="2025-03-19T12:05:38.325733306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 12:05:38.326252 containerd[1526]: time="2025-03-19T12:05:38.325841833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 12:05:38.326252 containerd[1526]: time="2025-03-19T12:05:38.325877684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:05:38.326875 containerd[1526]: time="2025-03-19T12:05:38.326069370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 12:05:38.356406 systemd[1]: Started cri-containerd-3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb.scope - libcontainer container 3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb. 
Mar 19 12:05:38.397543 containerd[1526]: time="2025-03-19T12:05:38.397308578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-98dj4,Uid:9ed282e1-7a35-48be-a1de-bd7324ab6830,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\"" Mar 19 12:05:38.408514 containerd[1526]: time="2025-03-19T12:05:38.408295449Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 12:05:38.421642 containerd[1526]: time="2025-03-19T12:05:38.421043204Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c\"" Mar 19 12:05:38.422396 containerd[1526]: time="2025-03-19T12:05:38.422348130Z" level=info msg="StartContainer for \"80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c\"" Mar 19 12:05:38.468461 systemd[1]: Started cri-containerd-80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c.scope - libcontainer container 80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c. Mar 19 12:05:38.506360 containerd[1526]: time="2025-03-19T12:05:38.505553004Z" level=info msg="StartContainer for \"80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c\" returns successfully" Mar 19 12:05:38.525623 systemd[1]: cri-containerd-80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c.scope: Deactivated successfully. Mar 19 12:05:38.526466 systemd[1]: cri-containerd-80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c.scope: Consumed 30ms CPU time, 9.5M memory peak, 3.1M read from disk. 
Mar 19 12:05:38.568337 containerd[1526]: time="2025-03-19T12:05:38.568159987Z" level=info msg="shim disconnected" id=80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c namespace=k8s.io Mar 19 12:05:38.568337 containerd[1526]: time="2025-03-19T12:05:38.568276715Z" level=warning msg="cleaning up after shim disconnected" id=80f38e538e5f822b7f66a4f3359e861d278586c3ea534fa179880a096001f87c namespace=k8s.io Mar 19 12:05:38.568337 containerd[1526]: time="2025-03-19T12:05:38.568293792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:05:38.778707 containerd[1526]: time="2025-03-19T12:05:38.778397420Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 19 12:05:38.791672 containerd[1526]: time="2025-03-19T12:05:38.791612675Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606\"" Mar 19 12:05:38.793769 containerd[1526]: time="2025-03-19T12:05:38.793467812Z" level=info msg="StartContainer for \"d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606\"" Mar 19 12:05:38.833448 systemd[1]: Started cri-containerd-d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606.scope - libcontainer container d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606. Mar 19 12:05:38.876802 containerd[1526]: time="2025-03-19T12:05:38.875904953Z" level=info msg="StartContainer for \"d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606\" returns successfully" Mar 19 12:05:38.888829 systemd[1]: cri-containerd-d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606.scope: Deactivated successfully. 
Mar 19 12:05:38.889745 systemd[1]: cri-containerd-d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606.scope: Consumed 27ms CPU time, 7M memory peak, 1.9M read from disk. Mar 19 12:05:38.929511 containerd[1526]: time="2025-03-19T12:05:38.929152969Z" level=info msg="shim disconnected" id=d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606 namespace=k8s.io Mar 19 12:05:38.929511 containerd[1526]: time="2025-03-19T12:05:38.929255279Z" level=warning msg="cleaning up after shim disconnected" id=d192f4ce449f367dd7ef75e47d3d2cd355ed49aafa263de89051932f1ff41606 namespace=k8s.io Mar 19 12:05:38.929511 containerd[1526]: time="2025-03-19T12:05:38.929271267Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:05:39.174572 sshd[4630]: Accepted publickey for core from 139.178.89.65 port 41234 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc Mar 19 12:05:39.176563 sshd-session[4630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 12:05:39.184311 systemd-logind[1509]: New session 30 of user core. Mar 19 12:05:39.190371 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 19 12:05:39.426351 kubelet[2839]: E0319 12:05:39.426168 2839 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 12:05:39.783227 containerd[1526]: time="2025-03-19T12:05:39.782990026Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 19 12:05:39.789519 sshd[4794]: Connection closed by 139.178.89.65 port 41234 Mar 19 12:05:39.790980 sshd-session[4630]: pam_unix(sshd:session): session closed for user core Mar 19 12:05:39.799815 systemd-logind[1509]: Session 30 logged out. Waiting for processes to exit. 
Mar 19 12:05:39.800750 systemd[1]: sshd@27-10.230.57.154:22-139.178.89.65:41234.service: Deactivated successfully. Mar 19 12:05:39.816616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014317466.mount: Deactivated successfully. Mar 19 12:05:39.818216 systemd[1]: session-30.scope: Deactivated successfully. Mar 19 12:05:39.821526 containerd[1526]: time="2025-03-19T12:05:39.821465459Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744\"" Mar 19 12:05:39.821976 systemd-logind[1509]: Removed session 30. Mar 19 12:05:39.826163 containerd[1526]: time="2025-03-19T12:05:39.823273693Z" level=info msg="StartContainer for \"6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744\"" Mar 19 12:05:39.867523 systemd[1]: Started cri-containerd-6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744.scope - libcontainer container 6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744. Mar 19 12:05:39.920789 containerd[1526]: time="2025-03-19T12:05:39.920645883Z" level=info msg="StartContainer for \"6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744\" returns successfully" Mar 19 12:05:39.943870 systemd[1]: cri-containerd-6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744.scope: Deactivated successfully. Mar 19 12:05:39.951578 systemd[1]: Started sshd@28-10.230.57.154:22-139.178.89.65:41242.service - OpenSSH per-connection server daemon (139.178.89.65:41242). 
Mar 19 12:05:39.978249 containerd[1526]: time="2025-03-19T12:05:39.978114903Z" level=info msg="shim disconnected" id=6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744 namespace=k8s.io Mar 19 12:05:39.978658 containerd[1526]: time="2025-03-19T12:05:39.978501289Z" level=warning msg="cleaning up after shim disconnected" id=6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744 namespace=k8s.io Mar 19 12:05:39.978658 containerd[1526]: time="2025-03-19T12:05:39.978527647Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:05:40.214757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6802ea4c682f5cd6a4317332afae1fb7b62f93c022c23b897fcad14423b72744-rootfs.mount: Deactivated successfully. Mar 19 12:05:40.788447 containerd[1526]: time="2025-03-19T12:05:40.787931173Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 19 12:05:40.822998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482360691.mount: Deactivated successfully. 
Mar 19 12:05:40.831107 containerd[1526]: time="2025-03-19T12:05:40.830998792Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4\"" Mar 19 12:05:40.833831 containerd[1526]: time="2025-03-19T12:05:40.832003484Z" level=info msg="StartContainer for \"b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4\"" Mar 19 12:05:40.861444 sshd[4839]: Accepted publickey for core from 139.178.89.65 port 41242 ssh2: RSA SHA256:wd0V+bPIs7QJ731rPibTo3OGwYA2jJ2A4YRQxxbXCKc Mar 19 12:05:40.866395 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 12:05:40.883409 systemd[1]: Started cri-containerd-b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4.scope - libcontainer container b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4. Mar 19 12:05:40.888763 systemd-logind[1509]: New session 31 of user core. Mar 19 12:05:40.893397 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 19 12:05:40.943529 systemd[1]: cri-containerd-b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4.scope: Deactivated successfully. 
Mar 19 12:05:40.950078 containerd[1526]: time="2025-03-19T12:05:40.949928375Z" level=info msg="StartContainer for \"b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4\" returns successfully" Mar 19 12:05:40.982418 containerd[1526]: time="2025-03-19T12:05:40.982341687Z" level=info msg="shim disconnected" id=b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4 namespace=k8s.io Mar 19 12:05:40.982418 containerd[1526]: time="2025-03-19T12:05:40.982413591Z" level=warning msg="cleaning up after shim disconnected" id=b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4 namespace=k8s.io Mar 19 12:05:40.982418 containerd[1526]: time="2025-03-19T12:05:40.982429989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 12:05:40.999113 containerd[1526]: time="2025-03-19T12:05:40.999009197Z" level=warning msg="cleanup warnings time=\"2025-03-19T12:05:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 19 12:05:41.215006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b77aea25ace114e44c5b15e554102d8baf1aa09435ae2a19a9de1b02642b34c4-rootfs.mount: Deactivated successfully. Mar 19 12:05:41.800655 containerd[1526]: time="2025-03-19T12:05:41.800593125Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 19 12:05:41.828431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075056787.mount: Deactivated successfully. 
Mar 19 12:05:41.831880 containerd[1526]: time="2025-03-19T12:05:41.831697669Z" level=info msg="CreateContainer within sandbox \"3f52cb8b917c3eefac4711f45c2673645e019814e171b5fd97cb18c0a88d95fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47ec190d53ba796679dc313382b7a7b43287355595c201f2a4d1b080efcd1c65\"" Mar 19 12:05:41.832420 containerd[1526]: time="2025-03-19T12:05:41.832386215Z" level=info msg="StartContainer for \"47ec190d53ba796679dc313382b7a7b43287355595c201f2a4d1b080efcd1c65\"" Mar 19 12:05:41.876433 systemd[1]: Started cri-containerd-47ec190d53ba796679dc313382b7a7b43287355595c201f2a4d1b080efcd1c65.scope - libcontainer container 47ec190d53ba796679dc313382b7a7b43287355595c201f2a4d1b080efcd1c65. Mar 19 12:05:41.925947 containerd[1526]: time="2025-03-19T12:05:41.925883168Z" level=info msg="StartContainer for \"47ec190d53ba796679dc313382b7a7b43287355595c201f2a4d1b080efcd1c65\" returns successfully" Mar 19 12:05:42.580721 kubelet[2839]: I0319 12:05:42.580659 2839 setters.go:580] "Node became not ready" node="srv-z8dvi.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T12:05:42Z","lastTransitionTime":"2025-03-19T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 19 12:05:42.678271 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 19 12:05:42.825859 kubelet[2839]: I0319 12:05:42.825738 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-98dj4" podStartSLOduration=5.825710407 podStartE2EDuration="5.825710407s" podCreationTimestamp="2025-03-19 12:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 12:05:42.824370659 +0000 UTC m=+163.868283930" 
watchObservedRunningTime="2025-03-19 12:05:42.825710407 +0000 UTC m=+163.869623657" Mar 19 12:05:45.967524 systemd[1]: run-containerd-runc-k8s.io-47ec190d53ba796679dc313382b7a7b43287355595c201f2a4d1b080efcd1c65-runc.cSqPhd.mount: Deactivated successfully. Mar 19 12:05:46.517330 systemd-networkd[1447]: lxc_health: Link UP Mar 19 12:05:46.517900 systemd-networkd[1447]: lxc_health: Gained carrier Mar 19 12:05:48.177379 systemd-networkd[1447]: lxc_health: Gained IPv6LL Mar 19 12:05:48.480993 systemd[1]: run-containerd-runc-k8s.io-47ec190d53ba796679dc313382b7a7b43287355595c201f2a4d1b080efcd1c65-runc.mTEKRh.mount: Deactivated successfully. Mar 19 12:05:53.352606 sshd[4884]: Connection closed by 139.178.89.65 port 41242 Mar 19 12:05:53.354239 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Mar 19 12:05:53.360443 systemd[1]: sshd@28-10.230.57.154:22-139.178.89.65:41242.service: Deactivated successfully. Mar 19 12:05:53.363427 systemd[1]: session-31.scope: Deactivated successfully. Mar 19 12:05:53.364715 systemd-logind[1509]: Session 31 logged out. Waiting for processes to exit. Mar 19 12:05:53.367022 systemd-logind[1509]: Removed session 31.