Mar 17 20:27:26.073980 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025 Mar 17 20:27:26.074019 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 20:27:26.074034 kernel: BIOS-provided physical RAM map: Mar 17 20:27:26.074051 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 17 20:27:26.074062 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 17 20:27:26.074072 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 17 20:27:26.074096 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Mar 17 20:27:26.074106 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Mar 17 20:27:26.074117 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 17 20:27:26.074127 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 17 20:27:26.074150 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 20:27:26.074168 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 17 20:27:26.074185 kernel: NX (Execute Disable) protection: active Mar 17 20:27:26.074196 kernel: APIC: Static calls initialized Mar 17 20:27:26.074221 kernel: SMBIOS 2.8 present. Mar 17 20:27:26.074239 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Mar 17 20:27:26.074251 kernel: Hypervisor detected: KVM Mar 17 20:27:26.074281 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 20:27:26.074293 kernel: kvm-clock: using sched offset of 5513359765 cycles Mar 17 20:27:26.074306 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 20:27:26.074318 kernel: tsc: Detected 2499.998 MHz processor Mar 17 20:27:26.074330 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 20:27:26.074342 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 20:27:26.074354 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Mar 17 20:27:26.075725 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 17 20:27:26.075747 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 20:27:26.075769 kernel: Using GB pages for direct mapping Mar 17 20:27:26.075781 kernel: ACPI: Early table checksum verification disabled Mar 17 20:27:26.075794 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Mar 17 20:27:26.075806 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:27:26.075818 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:27:26.075830 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:27:26.075842 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Mar 17 20:27:26.075854 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:27:26.075866 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Mar 17 20:27:26.075884 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:27:26.075897 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:27:26.075909 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Mar 17 20:27:26.075921 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Mar 17 20:27:26.075933 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Mar 17 20:27:26.075952 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Mar 17 20:27:26.075965 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Mar 17 20:27:26.075991 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Mar 17 20:27:26.076006 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Mar 17 20:27:26.076018 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 17 20:27:26.076031 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 17 20:27:26.076043 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Mar 17 20:27:26.076055 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Mar 17 20:27:26.076067 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Mar 17 20:27:26.076080 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Mar 17 20:27:26.076098 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Mar 17 20:27:26.076111 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Mar 17 20:27:26.076123 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Mar 17 20:27:26.076135 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Mar 17 20:27:26.076147 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Mar 17 20:27:26.076159 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Mar 17 20:27:26.076171 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Mar 17 20:27:26.076183 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Mar 17 20:27:26.076201 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Mar 17 20:27:26.076214 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Mar 17 20:27:26.076232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 17 20:27:26.076245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Mar 17 20:27:26.076257 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Mar 17 20:27:26.076270 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Mar 17 20:27:26.076283 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Mar 17 20:27:26.076296 kernel: Zone ranges: Mar 17 20:27:26.076308 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 20:27:26.076320 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Mar 17 20:27:26.076333 kernel: Normal empty Mar 17 20:27:26.076351 kernel: Movable zone start for each node Mar 17 20:27:26.076363 kernel: Early memory node ranges Mar 17 20:27:26.076375 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 17 20:27:26.076388 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Mar 17 20:27:26.076405 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Mar 17 20:27:26.076435 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 20:27:26.076448 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 17 20:27:26.076466 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Mar 17 20:27:26.076480 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 20:27:26.076499 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 20:27:26.076512 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Mar 17 20:27:26.076532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 20:27:26.076544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 20:27:26.076557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 20:27:26.076570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 20:27:26.076582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 20:27:26.076595 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 20:27:26.076607 kernel: TSC deadline timer available Mar 17 20:27:26.076626 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Mar 17 20:27:26.076639 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 17 20:27:26.076651 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 17 20:27:26.076691 kernel: Booting paravirtualized kernel on KVM Mar 17 20:27:26.076704 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 20:27:26.076717 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Mar 17 20:27:26.076729 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Mar 17 20:27:26.076742 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Mar 17 20:27:26.076754 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Mar 17 20:27:26.076774 kernel: kvm-guest: PV spinlocks enabled Mar 17 20:27:26.076787 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 20:27:26.076801 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 20:27:26.076814 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 20:27:26.076826 kernel: random: crng init done Mar 17 20:27:26.076839 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 20:27:26.076851 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 17 20:27:26.076864 kernel: Fallback order for Node 0: 0 Mar 17 20:27:26.076888 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Mar 17 20:27:26.076902 kernel: Policy zone: DMA32 Mar 17 20:27:26.076914 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 20:27:26.076927 kernel: software IO TLB: area num 16. Mar 17 20:27:26.076939 kernel: Memory: 1899468K/2096616K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 196888K reserved, 0K cma-reserved) Mar 17 20:27:26.076964 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Mar 17 20:27:26.076976 kernel: Kernel/User page tables isolation: enabled Mar 17 20:27:26.076988 kernel: ftrace: allocating 37910 entries in 149 pages Mar 17 20:27:26.077001 kernel: ftrace: allocated 149 pages with 4 groups Mar 17 20:27:26.077031 kernel: Dynamic Preempt: voluntary Mar 17 20:27:26.077043 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 20:27:26.077057 kernel: rcu: RCU event tracing is enabled. 
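The BIOS-e820 map near the top of this log and the later "Memory: 1899468K/2096616K available" line describe the same guest RAM. As a rough cross-check (a minimal Python sketch using the e820 values printed above, not any Flatcar tooling), totalling the "usable" ranges lands within a few KiB of the kernel's figure:

```python
import re

# e820 lines copied from the log above; "usable" ranges are what the kernel
# can hand to the page allocator (minus a few pages it reserves itself).
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
"""

usable = 0
for start, end, kind in re.findall(
        r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)", E820):
    if kind == "usable":
        usable += int(end, 16) - int(start, 16) + 1  # ranges are inclusive

print(f"usable RAM: {usable} bytes = {usable // 1024} KiB")
# -> 2146941952 bytes = 2096623 KiB, within a few KiB of the
#    "Memory: 1899468K/2096616K available" total reported later in the log.
```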
Mar 17 20:27:26.077070 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Mar 17 20:27:26.077083 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 20:27:26.077109 kernel: Rude variant of Tasks RCU enabled. Mar 17 20:27:26.077128 kernel: Tracing variant of Tasks RCU enabled. Mar 17 20:27:26.077141 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 17 20:27:26.077154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Mar 17 20:27:26.077167 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Mar 17 20:27:26.077181 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 20:27:26.077194 kernel: Console: colour VGA+ 80x25 Mar 17 20:27:26.077213 kernel: printk: console [tty0] enabled Mar 17 20:27:26.077226 kernel: printk: console [ttyS0] enabled Mar 17 20:27:26.077239 kernel: ACPI: Core revision 20230628 Mar 17 20:27:26.077252 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 20:27:26.077265 kernel: x2apic enabled Mar 17 20:27:26.077283 kernel: APIC: Switched APIC routing to: physical x2apic Mar 17 20:27:26.077302 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 17 20:27:26.077317 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Mar 17 20:27:26.077330 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 17 20:27:26.077343 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Mar 17 20:27:26.077356 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Mar 17 20:27:26.077369 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 20:27:26.077382 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 20:27:26.077395 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 20:27:26.077415 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 20:27:26.077428 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 17 20:27:26.077441 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 17 20:27:26.077454 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 17 20:27:26.077467 kernel: MDS: Mitigation: Clear CPU buffers Mar 17 20:27:26.077480 kernel: MMIO Stale Data: Unknown: No mitigations Mar 17 20:27:26.077493 kernel: SRBDS: Unknown: Dependent on hypervisor status Mar 17 20:27:26.077506 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 20:27:26.077529 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 20:27:26.077549 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 20:27:26.077562 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 20:27:26.077581 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 17 20:27:26.077605 kernel: Freeing SMP alternatives memory: 32K Mar 17 20:27:26.077621 kernel: pid_max: default: 32768 minimum: 301 Mar 17 20:27:26.077634 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 20:27:26.077647 kernel: landlock: Up and running. Mar 17 20:27:26.080732 kernel: SELinux: Initializing. 
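The "x86/fpu" lines above describe which XSAVE state components the guest exposes. A small sketch of the arithmetic, assuming only that mask 0x7 names the three components the kernel lists (x87, SSE, AVX) and that the printed offset/size pair belongs to the AVX component:

```python
# Decode the xstate feature mask 0x7 from the log above (illustrative only;
# the bit positions follow the kernel's XSAVE component numbering).
XFEATURES = {0: "x87 floating point registers",
             1: "SSE registers",
             2: "AVX registers"}

mask = 0x7
enabled = [name for bit, name in XFEATURES.items() if mask & (1 << bit)]
print(enabled)     # the three features the log reports as supported

# The AVX component sits at xstate_offset[2] = 576 with xstate_sizes[2] = 256,
# so the whole save area is 576 + 256 bytes, matching "context size is 832
# bytes" in the log.
print(576 + 256)   # -> 832
```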
Mar 17 20:27:26.080752 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 17 20:27:26.080766 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 17 20:27:26.080780 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Mar 17 20:27:26.080793 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Mar 17 20:27:26.080807 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Mar 17 20:27:26.080830 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Mar 17 20:27:26.080844 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Mar 17 20:27:26.080857 kernel: signal: max sigframe size: 1776 Mar 17 20:27:26.080870 kernel: rcu: Hierarchical SRCU implementation. Mar 17 20:27:26.080884 kernel: rcu: Max phase no-delay instances is 400. Mar 17 20:27:26.080898 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 17 20:27:26.080911 kernel: smp: Bringing up secondary CPUs ... Mar 17 20:27:26.080924 kernel: smpboot: x86: Booting SMP configuration: Mar 17 20:27:26.080937 kernel: .... node #0, CPUs: #1 Mar 17 20:27:26.080957 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Mar 17 20:27:26.080970 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 20:27:26.080983 kernel: smpboot: Max logical packages: 16 Mar 17 20:27:26.080997 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Mar 17 20:27:26.081010 kernel: devtmpfs: initialized Mar 17 20:27:26.081023 kernel: x86/mm: Memory block size: 128MB Mar 17 20:27:26.081036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 20:27:26.081049 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Mar 17 20:27:26.081063 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 20:27:26.081082 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 20:27:26.081096 kernel: audit: initializing netlink subsys (disabled) Mar 17 20:27:26.081109 kernel: audit: type=2000 audit(1742243245.143:1): state=initialized audit_enabled=0 res=1 Mar 17 20:27:26.081122 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 20:27:26.081135 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 20:27:26.081149 kernel: cpuidle: using governor menu Mar 17 20:27:26.081162 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 20:27:26.081175 kernel: dca service started, version 1.12.1 Mar 17 20:27:26.081189 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 17 20:27:26.081207 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 17 20:27:26.081221 kernel: PCI: Using configuration type 1 for base access Mar 17 20:27:26.081235 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
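The per-CPU BogoMIPS value above is derived from loops_per_jiffy, and the SMP summary simply multiplies it by the number of CPUs brought up. A hedged cross-check, assuming the usual BogoMIPS = lpj * HZ / 500000 relation and a tick rate of HZ=1000 (an assumption, but the only common value consistent with the numbers in this log):

```python
lpj = 2499998   # "(lpj=2499998)" from the calibration line above
HZ = 1000       # assumed kernel tick rate

per_cpu = lpj * HZ / 500000
print(per_cpu)       # 4999.996, which the kernel prints truncated as "4999.99"
print(2 * per_cpu)   # ~9999.99, the "Total of 2 processors activated
                     # (9999.99 BogoMIPS)" figure for the 2 CPUs brought up
```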
Mar 17 20:27:26.081248 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 20:27:26.081261 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 20:27:26.081274 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 20:27:26.081299 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 20:27:26.081312 kernel: ACPI: Added _OSI(Module Device) Mar 17 20:27:26.081325 kernel: ACPI: Added _OSI(Processor Device) Mar 17 20:27:26.081344 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 20:27:26.081370 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 20:27:26.081383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 20:27:26.081396 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 20:27:26.081409 kernel: ACPI: Interpreter enabled Mar 17 20:27:26.081423 kernel: ACPI: PM: (supports S0 S5) Mar 17 20:27:26.081436 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 20:27:26.081449 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 20:27:26.081462 kernel: PCI: Using E820 reservations for host bridge windows Mar 17 20:27:26.081481 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 17 20:27:26.081495 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 20:27:26.081820 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 20:27:26.082040 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 20:27:26.082229 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 20:27:26.082250 kernel: PCI host bridge to bus 0000:00 Mar 17 20:27:26.082462 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 20:27:26.082650 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 20:27:26.084899 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 20:27:26.085081 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Mar 17 20:27:26.085276 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 20:27:26.085449 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Mar 17 20:27:26.085621 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 20:27:26.087956 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 17 20:27:26.088210 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Mar 17 20:27:26.088401 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Mar 17 20:27:26.088589 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Mar 17 20:27:26.089816 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Mar 17 20:27:26.090005 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 20:27:26.090232 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Mar 17 20:27:26.090439 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Mar 17 20:27:26.090704 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Mar 17 20:27:26.090896 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Mar 17 20:27:26.091094 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Mar 17 20:27:26.091281 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Mar 17 20:27:26.091490 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Mar 17 
20:27:26.095763 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Mar 17 20:27:26.096004 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Mar 17 20:27:26.096198 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Mar 17 20:27:26.096422 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Mar 17 20:27:26.096611 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Mar 17 20:27:26.096851 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Mar 17 20:27:26.097049 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Mar 17 20:27:26.097249 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Mar 17 20:27:26.097454 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Mar 17 20:27:26.100879 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Mar 17 20:27:26.101111 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 17 20:27:26.101305 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Mar 17 20:27:26.101494 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Mar 17 20:27:26.101732 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Mar 17 20:27:26.101970 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Mar 17 20:27:26.102159 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Mar 17 20:27:26.102345 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Mar 17 20:27:26.102530 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Mar 17 20:27:26.104807 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 17 20:27:26.105006 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 17 20:27:26.105219 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 17 20:27:26.105441 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Mar 17 20:27:26.105655 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Mar 17 20:27:26.106021 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 17 20:27:26.106258 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 17 20:27:26.106476 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Mar 17 20:27:26.108745 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Mar 17 20:27:26.109033 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Mar 17 20:27:26.109228 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Mar 17 20:27:26.109419 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Mar 17 20:27:26.109639 kernel: pci_bus 0000:02: extended config space not accessible Mar 17 20:27:26.109972 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Mar 17 20:27:26.110184 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Mar 17 20:27:26.110375 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Mar 17 20:27:26.110566 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Mar 17 20:27:26.110805 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Mar 17 20:27:26.111000 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Mar 17 20:27:26.111189 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Mar 17 20:27:26.111373 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Mar 17 20:27:26.111566 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Mar 17 20:27:26.111867 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Mar 17 
20:27:26.112065 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Mar 17 20:27:26.112256 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Mar 17 20:27:26.112441 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Mar 17 20:27:26.112625 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Mar 17 20:27:26.112850 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Mar 17 20:27:26.113038 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Mar 17 20:27:26.113229 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Mar 17 20:27:26.113418 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Mar 17 20:27:26.113601 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Mar 17 20:27:26.113829 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Mar 17 20:27:26.114021 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Mar 17 20:27:26.114206 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Mar 17 20:27:26.114392 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Mar 17 20:27:26.114581 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Mar 17 20:27:26.114854 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Mar 17 20:27:26.115041 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 17 20:27:26.115229 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Mar 17 20:27:26.115411 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Mar 17 20:27:26.115593 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 17 20:27:26.115614 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 20:27:26.115629 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 20:27:26.115642 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 20:27:26.115714 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 20:27:26.115729 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 17 20:27:26.115743 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 17 20:27:26.115756 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 17 20:27:26.115770 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 17 20:27:26.115783 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 17 20:27:26.115796 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 17 20:27:26.115809 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 17 20:27:26.115822 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 17 20:27:26.115843 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 17 20:27:26.115857 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 17 20:27:26.115871 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 17 20:27:26.115884 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 17 20:27:26.115897 kernel: iommu: Default domain type: Translated Mar 17 20:27:26.115911 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 20:27:26.115924 kernel: PCI: Using ACPI for IRQ routing Mar 17 20:27:26.115938 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 20:27:26.115951 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 17 20:27:26.115970 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Mar 17 20:27:26.116158 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Mar 17 20:27:26.116349 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 17 20:27:26.116538 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 20:27:26.116559 kernel: vgaarb: loaded Mar 17 20:27:26.116573 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 20:27:26.116586 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 20:27:26.116600 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 20:27:26.116635 kernel: pnp: PnP ACPI init Mar 17 20:27:26.116873 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 17 20:27:26.116896 kernel: pnp: PnP ACPI: found 5 devices Mar 17 20:27:26.116910 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 20:27:26.116923 kernel: NET: Registered PF_INET protocol family Mar 17 20:27:26.116937 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 20:27:26.116950 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 17 20:27:26.116964 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 20:27:26.116977 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 20:27:26.116999 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 17 20:27:26.117013 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 17 20:27:26.117027 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 20:27:26.117040 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 20:27:26.117053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 20:27:26.117067 kernel: NET: Registered PF_XDP protocol family Mar 17 20:27:26.117250 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Mar 17 20:27:26.117435 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 17 20:27:26.117629 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 17 20:27:26.117851 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 17 20:27:26.118037 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 17 20:27:26.118219 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 17 20:27:26.118404 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 17 20:27:26.118587 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 17 20:27:26.118823 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Mar 17 20:27:26.119013 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Mar 17 20:27:26.119198 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Mar 17 20:27:26.119383 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Mar 17 20:27:26.119567 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Mar 17 20:27:26.119809 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Mar 17 20:27:26.119992 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Mar 17 20:27:26.120183 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Mar 17 20:27:26.120404 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Mar 17 20:27:26.120601 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Mar 17 
20:27:26.120813 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Mar 17 20:27:26.120997 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Mar 17 20:27:26.121180 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Mar 17 20:27:26.121364 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Mar 17 20:27:26.121553 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Mar 17 20:27:26.121785 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Mar 17 20:27:26.121988 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Mar 17 20:27:26.122176 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Mar 17 20:27:26.122377 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Mar 17 20:27:26.122568 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Mar 17 20:27:26.122812 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Mar 17 20:27:26.123008 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Mar 17 20:27:26.123200 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Mar 17 20:27:26.123383 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Mar 17 20:27:26.123565 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Mar 17 20:27:26.123778 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Mar 17 20:27:26.123963 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Mar 17 20:27:26.124147 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Mar 17 20:27:26.124333 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Mar 17 20:27:26.124516 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Mar 17 20:27:26.124742 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Mar 17 20:27:26.124940 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Mar 17 20:27:26.125131 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Mar 17 20:27:26.125318 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Mar 17 20:27:26.125505 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Mar 17 20:27:26.125762 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Mar 17 20:27:26.125958 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Mar 17 20:27:26.126142 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 17 20:27:26.126326 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Mar 17 20:27:26.126521 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Mar 17 20:27:26.126745 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Mar 17 20:27:26.126931 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 17 20:27:26.127112 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 20:27:26.127281 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 20:27:26.127457 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 20:27:26.127626 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Mar 17 20:27:26.127867 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 17 20:27:26.128035 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Mar 17 20:27:26.128242 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Mar 17 20:27:26.128421 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Mar 17 20:27:26.128596 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Mar 17 20:27:26.128840 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Mar 17 20:27:26.129054 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Mar 17 20:27:26.129233 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Mar 17 20:27:26.129410 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Mar 17 20:27:26.129601 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Mar 17 20:27:26.129840 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Mar 17 20:27:26.130018 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Mar 17 20:27:26.130220 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Mar 17 20:27:26.130397 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Mar 17 20:27:26.130571 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Mar 17 20:27:26.130816 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Mar 17 20:27:26.130995 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Mar 17 20:27:26.131173 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Mar 17 20:27:26.131409 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Mar 17 20:27:26.131596 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Mar 17 20:27:26.131833 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Mar 17 20:27:26.132022 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Mar 17 20:27:26.132198 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Mar 17 20:27:26.132372 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 17 20:27:26.132571 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Mar 17 20:27:26.132791 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Mar 17 20:27:26.132979 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 17 20:27:26.133001 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 17 20:27:26.133016 kernel: PCI: CLS 0 bytes, default 64 Mar 17 20:27:26.133030 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 17 20:27:26.133045 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Mar 17 20:27:26.133059 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 20:27:26.133073 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 17 20:27:26.133087 kernel: Initialise system trusted keyrings Mar 17 20:27:26.133109 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 17 20:27:26.133124 kernel: Key type asymmetric registered Mar 17 20:27:26.133138 kernel: Asymmetric key parser 'x509' registered Mar 17 20:27:26.133152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 20:27:26.133165 kernel: io scheduler mq-deadline registered Mar 17 20:27:26.133179 kernel: io scheduler kyber registered Mar 17 20:27:26.133193 kernel: io scheduler bfq registered Mar 17 20:27:26.133376 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 17 20:27:26.133564 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 17 20:27:26.133818 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.134008 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 17 20:27:26.134192 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Mar 17 20:27:26.134376 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.134561 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 17 20:27:26.134800 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 17 20:27:26.134996 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.135183 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 17 20:27:26.135366 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 17 20:27:26.135552 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.135799 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 17 20:27:26.135985 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 17 20:27:26.136180 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.136365 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 17 20:27:26.136550 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 17 20:27:26.136784 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.136975 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 17 20:27:26.137160 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 17 20:27:26.137354 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.137541 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 17 20:27:26.137787 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 17 20:27:26.137974 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:27:26.137996 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 20:27:26.138011 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 20:27:26.138034 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 17 20:27:26.138048 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 20:27:26.138062 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 20:27:26.138077 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 20:27:26.138091 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 20:27:26.138104 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 20:27:26.138297 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 17 20:27:26.138327 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 20:27:26.138512 kernel: rtc_cmos 00:03: registered as rtc0 Mar 17 20:27:26.138717 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T20:27:25 UTC (1742243245) Mar 17 20:27:26.138920 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 17 20:27:26.138942 kernel: intel_pstate: CPU model not supported Mar 17 20:27:26.138956 kernel: NET: Registered PF_INET6 protocol family Mar 17 20:27:26.138970 kernel: Segment Routing with IPv6 Mar 17 20:27:26.138985 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 
20:27:26.138999 kernel: NET: Registered PF_PACKET protocol family Mar 17 20:27:26.139013 kernel: Key type dns_resolver registered Mar 17 20:27:26.139035 kernel: IPI shorthand broadcast: enabled Mar 17 20:27:26.139050 kernel: sched_clock: Marking stable (1381014548, 237107266)->(1894952966, -276831152) Mar 17 20:27:26.139064 kernel: registered taskstats version 1 Mar 17 20:27:26.139078 kernel: Loading compiled-in X.509 certificates Mar 17 20:27:26.139092 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd' Mar 17 20:27:26.139106 kernel: Key type .fscrypt registered Mar 17 20:27:26.139120 kernel: Key type fscrypt-provisioning registered Mar 17 20:27:26.139134 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 20:27:26.139147 kernel: ima: Allocated hash algorithm: sha1 Mar 17 20:27:26.139167 kernel: ima: No architecture policies found Mar 17 20:27:26.139181 kernel: clk: Disabling unused clocks Mar 17 20:27:26.139195 kernel: Freeing unused kernel image (initmem) memory: 43476K Mar 17 20:27:26.139209 kernel: Write protecting the kernel read-only data: 38912k Mar 17 20:27:26.139223 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 17 20:27:26.139237 kernel: Run /init as init process Mar 17 20:27:26.139251 kernel: with arguments: Mar 17 20:27:26.139265 kernel: /init Mar 17 20:27:26.139278 kernel: with environment: Mar 17 20:27:26.139297 kernel: HOME=/ Mar 17 20:27:26.139311 kernel: TERM=linux Mar 17 20:27:26.139324 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 20:27:26.139340 systemd[1]: Successfully made /usr/ read-only. Mar 17 20:27:26.139358 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 20:27:26.139374 systemd[1]: Detected virtualization kvm. Mar 17 20:27:26.139388 systemd[1]: Detected architecture x86-64. Mar 17 20:27:26.139402 systemd[1]: Running in initrd. Mar 17 20:27:26.139423 systemd[1]: No hostname configured, using default hostname. Mar 17 20:27:26.139439 systemd[1]: Hostname set to . Mar 17 20:27:26.139453 systemd[1]: Initializing machine ID from VM UUID. Mar 17 20:27:26.139467 systemd[1]: Queued start job for default target initrd.target. Mar 17 20:27:26.139486 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 20:27:26.139501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 20:27:26.139516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 20:27:26.139531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 20:27:26.139561 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 20:27:26.139577 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 20:27:26.139593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 20:27:26.139608 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
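The audit record near the top of the log ("audit(1742243245.143:1)") and the rtc_cmos entry above ("setting system clock to 2025-03-17T20:27:25 UTC (1742243245)") carry the same epoch second in two forms. Converting one to the other is a one-liner:

```python
from datetime import datetime, timezone

ts = 1742243245  # epoch second printed by both the audit and rtc_cmos entries
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2025-03-17T20:27:25+00:00, matching the rtc_cmos line
```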
Mar 17 20:27:26.139623 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 20:27:26.139638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 20:27:26.139692 systemd[1]: Reached target paths.target - Path Units. Mar 17 20:27:26.139711 systemd[1]: Reached target slices.target - Slice Units. Mar 17 20:27:26.139726 systemd[1]: Reached target swap.target - Swaps. Mar 17 20:27:26.139741 systemd[1]: Reached target timers.target - Timer Units. Mar 17 20:27:26.139756 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 20:27:26.139770 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 20:27:26.139785 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 20:27:26.139799 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 20:27:26.139814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 20:27:26.139837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 20:27:26.139852 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 20:27:26.139867 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 20:27:26.139882 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 20:27:26.139897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 20:27:26.139912 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 20:27:26.139926 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 20:27:26.139941 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 20:27:26.139955 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 20:27:26.139976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 20:27:26.139991 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 20:27:26.140011 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 20:27:26.140077 systemd-journald[201]: Collecting audit messages is disabled. Mar 17 20:27:26.140118 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 20:27:26.140134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 20:27:26.140149 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 20:27:26.140163 kernel: Bridge firewalling registered Mar 17 20:27:26.140185 systemd-journald[201]: Journal started Mar 17 20:27:26.140212 systemd-journald[201]: Runtime Journal (/run/log/journal/513cf63564c742c7b154b2649a1285f5) is 4.7M, max 37.9M, 33.2M free. Mar 17 20:27:26.073340 systemd-modules-load[202]: Inserted module 'overlay' Mar 17 20:27:26.109025 systemd-modules-load[202]: Inserted module 'br_netfilter' Mar 17 20:27:26.187744 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 20:27:26.188793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 20:27:26.190826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 20:27:26.191884 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 20:27:26.205913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 17 20:27:26.207870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 20:27:26.213908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 20:27:26.221868 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 20:27:26.236031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 20:27:26.239153 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 20:27:26.242211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 20:27:26.253307 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 20:27:26.254563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 20:27:26.258909 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 20:27:26.269214 dracut-cmdline[236]: dracut-dracut-053 Mar 17 20:27:26.274091 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 20:27:26.317201 systemd-resolved[238]: Positive Trust Anchors: Mar 17 20:27:26.317222 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 20:27:26.317265 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 20:27:26.321002 systemd-resolved[238]: Defaulting to hostname 'linux'. Mar 17 20:27:26.322831 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 20:27:26.329764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 20:27:26.405746 kernel: SCSI subsystem initialized Mar 17 20:27:26.416717 kernel: Loading iSCSI transport class v2.0-870. Mar 17 20:27:26.430718 kernel: iscsi: registered transport (tcp) Mar 17 20:27:26.456740 kernel: iscsi: registered transport (qla4xxx) Mar 17 20:27:26.456803 kernel: QLogic iSCSI HBA Driver Mar 17 20:27:26.517740 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 20:27:26.526138 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 20:27:26.570012 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
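The dracut-cmdline entry above repeats the full kernel command line that the initrd-side generators consume. A minimal sketch of splitting it into flags and key=value pairs (Python, abridged to parameters quoted in this log; not dracut's actual parser):

```python
# Abridged copy of the command line from the dracut-cmdline[236] entry above.
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
           "flatcar.first_boot=detected flatcar.oem.id=openstack "
           "verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c")

params: dict[str, list[str]] = {}
flags: list[str] = []
for token in cmdline.split():
    if "=" in token:
        key, value = token.split("=", 1)          # keep PARTUUID=... intact
        params.setdefault(key, []).append(value)  # console= appears twice
    else:
        flags.append(token)

print(params["root"])                       # ['LABEL=ROOT']
print(params["verity.usrhash"][0][:16])     # root hash used for /dev/mapper/usr
```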
Mar 17 20:27:26.570080 kernel: device-mapper: uevent: version 1.0.3 Mar 17 20:27:26.571739 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 20:27:26.622734 kernel: raid6: sse2x4 gen() 13554 MB/s Mar 17 20:27:26.640700 kernel: raid6: sse2x2 gen() 9512 MB/s Mar 17 20:27:26.659314 kernel: raid6: sse2x1 gen() 10103 MB/s Mar 17 20:27:26.659355 kernel: raid6: using algorithm sse2x4 gen() 13554 MB/s Mar 17 20:27:26.678354 kernel: raid6: .... xor() 7747 MB/s, rmw enabled Mar 17 20:27:26.678416 kernel: raid6: using ssse3x2 recovery algorithm Mar 17 20:27:26.705709 kernel: xor: automatically using best checksumming function avx Mar 17 20:27:26.886699 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 20:27:26.902765 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 20:27:26.910923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 20:27:26.942861 systemd-udevd[421]: Using default interface naming scheme 'v255'. Mar 17 20:27:26.952441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 20:27:26.961116 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 20:27:26.984705 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Mar 17 20:27:27.026305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 20:27:27.033891 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 20:27:27.167903 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 20:27:27.177878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 20:27:27.202570 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 20:27:27.205162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 20:27:27.208648 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 20:27:27.209362 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 20:27:27.218918 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 20:27:27.239235 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 20:27:27.312751 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 17 20:27:27.391046 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 20:27:27.391084 kernel: AVX version of gcm_enc/dec engaged. Mar 17 20:27:27.391104 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 20:27:27.391341 kernel: AES CTR mode by8 optimization enabled Mar 17 20:27:27.391365 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 20:27:27.391384 kernel: GPT:17805311 != 125829119 Mar 17 20:27:27.391403 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 20:27:27.391421 kernel: GPT:17805311 != 125829119 Mar 17 20:27:27.391439 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 20:27:27.391457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:27:27.351874 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 20:27:27.352059 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 20:27:27.353843 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
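The GPT warnings above are expected on a first boot: the image's backup GPT header was written for a much smaller disk than the 60 GiB virtio volume it now sits on, and the log later shows disk-uuid.service rewriting the headers ("Primary Header is updated" ... "The operation has completed successfully"). The arithmetic behind the numbers, using only values printed in this log:

```python
SECTOR = 512
disk_sectors = 125829120   # "[vda] 125829120 512-byte logical blocks"

alt_header_lba = 17805311        # where the primary header says the backup is
disk_last_lba = disk_sectors - 1 # where the backup should be on this disk

print(disk_sectors * SECTOR / 1e9)            # 64.42... GB  ("64.4 GB")
print(disk_sectors * SECTOR / 2**30)          # 60.0 GiB     ("60.0 GiB")
print((alt_header_lba + 1) * SECTOR / 2**30)  # ~8.49 GiB: disk size the GPT
                                              # was originally written for
print(alt_header_lba, "!=", disk_last_lba)    # the "17805311 != 125829119" line
```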
Mar 17 20:27:27.354563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 20:27:27.354774 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 20:27:27.355833 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 20:27:27.369146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 20:27:27.370352 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 20:27:27.414963 kernel: libata version 3.00 loaded. Mar 17 20:27:27.419692 kernel: ACPI: bus type USB registered Mar 17 20:27:27.423036 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 20:27:27.554725 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 20:27:27.554783 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 20:27:27.555074 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 20:27:27.555332 kernel: usbcore: registered new interface driver usbfs Mar 17 20:27:27.555362 kernel: usbcore: registered new interface driver hub Mar 17 20:27:27.555386 kernel: usbcore: registered new device driver usb Mar 17 20:27:27.555405 kernel: scsi host0: ahci Mar 17 20:27:27.557957 kernel: scsi host1: ahci Mar 17 20:27:27.558238 kernel: scsi host2: ahci Mar 17 20:27:27.558520 kernel: scsi host3: ahci Mar 17 20:27:27.560230 kernel: scsi host4: ahci Mar 17 20:27:27.560492 kernel: scsi host5: ahci Mar 17 20:27:27.562815 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Mar 17 20:27:27.562844 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Mar 17 20:27:27.562864 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Mar 17 20:27:27.562882 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Mar 17 20:27:27.562899 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Mar 17 20:27:27.562922 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Mar 17 20:27:27.585805 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470) Mar 17 20:27:27.613683 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (468) Mar 17 20:27:27.673434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 20:27:27.709953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 20:27:27.724177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 20:27:27.737438 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 20:27:27.748348 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 20:27:27.749218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 20:27:27.756879 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 20:27:27.759846 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 20:27:27.768711 disk-uuid[562]: Primary Header is updated. Mar 17 20:27:27.768711 disk-uuid[562]: Secondary Entries is updated. Mar 17 20:27:27.768711 disk-uuid[562]: Secondary Header is updated. 
Mar 17 20:27:27.774729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:27:27.810317 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 20:27:27.859682 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 20:27:27.865697 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 20:27:27.872556 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 20:27:27.872593 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 20:27:27.872613 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 17 20:27:27.872667 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 20:27:27.893713 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 20:27:27.961207 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 17 20:27:27.961463 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 20:27:27.961820 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 20:27:27.962053 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 17 20:27:27.962276 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 17 20:27:27.962496 kernel: hub 1-0:1.0: USB hub found Mar 17 20:27:27.963989 kernel: hub 1-0:1.0: 4 ports detected Mar 17 20:27:27.964223 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 20:27:27.964502 kernel: hub 2-0:1.0: USB hub found Mar 17 20:27:27.965856 kernel: hub 2-0:1.0: 4 ports detected Mar 17 20:27:28.190724 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 20:27:28.332818 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 20:27:28.338758 kernel: usbcore: registered new interface driver usbhid Mar 17 20:27:28.338806 kernel: usbhid: USB HID core driver Mar 17 20:27:28.348003 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 17 20:27:28.348049 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 17 20:27:28.790707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:27:28.793026 disk-uuid[563]: The operation has completed successfully. Mar 17 20:27:28.871886 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 20:27:28.872109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 20:27:28.914956 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 20:27:28.919547 sh[586]: Success Mar 17 20:27:28.936856 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 17 20:27:28.999963 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 20:27:29.007809 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 20:27:29.012259 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 20:27:29.042725 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc Mar 17 20:27:29.042780 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:27:29.042803 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 20:27:29.045177 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 20:27:29.048442 kernel: BTRFS info (device dm-0): using free space tree Mar 17 20:27:29.057271 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 20:27:29.058647 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 20:27:29.065867 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 20:27:29.070848 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 20:27:29.091641 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 20:27:29.091739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:27:29.091765 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:27:29.097678 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 20:27:29.111122 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 20:27:29.113684 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 20:27:29.121554 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 20:27:29.129889 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 20:27:29.314766 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 20:27:29.321838 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 20:27:29.325729 ignition[672]: Ignition 2.20.0 Mar 17 20:27:29.325747 ignition[672]: Stage: fetch-offline Mar 17 20:27:29.325808 ignition[672]: no configs at "/usr/lib/ignition/base.d" Mar 17 20:27:29.325826 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:27:29.325958 ignition[672]: parsed url from cmdline: "" Mar 17 20:27:29.325965 ignition[672]: no config URL provided Mar 17 20:27:29.325975 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:27:29.325991 ignition[672]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:27:29.326000 ignition[672]: failed to fetch config: resource requires networking Mar 17 20:27:29.333484 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 20:27:29.326308 ignition[672]: Ignition finished successfully Mar 17 20:27:29.365035 systemd-networkd[775]: lo: Link UP Mar 17 20:27:29.365064 systemd-networkd[775]: lo: Gained carrier Mar 17 20:27:29.367642 systemd-networkd[775]: Enumeration completed Mar 17 20:27:29.368182 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 20:27:29.368189 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
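Editor's note: the fetch-offline stage above probes a fixed set of local config locations (base.d, the openstack platform dir, user.ign) and the kernel command line before giving up with "resource requires networking". The sketch below is illustrative only, not Ignition's implementation; the paths are the ones named in the log, and ignition.config.url is assumed to be the usual kernel argument for pointing at a remote config.

    import os

    paths = [
        "/usr/lib/ignition/base.d",
        "/usr/lib/ignition/base.platform.d/openstack",
        "/usr/lib/ignition/user.ign",
    ]
    for p in paths:
        print(f"{p}: {'found' if os.path.exists(p) else 'not found'}")

    # A config URL, if any, would come from the kernel command line.
    with open("/proc/cmdline") as f:
        cmdline = f.read().split()
    url = next((a.split("=", 1)[1] for a in cmdline
                if a.startswith("ignition.config.url=")), None)
    print("config URL from cmdline:", url or "none provided")
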
Mar 17 20:27:29.369927 systemd-networkd[775]: eth0: Link UP Mar 17 20:27:29.369933 systemd-networkd[775]: eth0: Gained carrier Mar 17 20:27:29.369944 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 20:27:29.371461 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 20:27:29.373271 systemd[1]: Reached target network.target - Network. Mar 17 20:27:29.380843 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 20:27:29.397815 ignition[779]: Ignition 2.20.0 Mar 17 20:27:29.397830 ignition[779]: Stage: fetch Mar 17 20:27:29.398067 ignition[779]: no configs at "/usr/lib/ignition/base.d" Mar 17 20:27:29.398088 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:27:29.398234 ignition[779]: parsed url from cmdline: "" Mar 17 20:27:29.398241 ignition[779]: no config URL provided Mar 17 20:27:29.398252 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:27:29.398268 ignition[779]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:27:29.398378 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 17 20:27:29.398544 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 17 20:27:29.398597 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Mar 17 20:27:29.398647 ignition[779]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 17 20:27:29.429755 systemd-networkd[775]: eth0: DHCPv4 address 10.230.57.126/30, gateway 10.230.57.125 acquired from 10.230.57.125 Mar 17 20:27:29.598837 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Mar 17 20:27:29.615975 ignition[779]: GET result: OK Mar 17 20:27:29.616436 ignition[779]: parsing config with SHA512: 54e03c6f8911a6b3453d2dc9a6f61ad08fd2c6822c979f0d17b1130d264343176f8237fc92a941471c077cf33f2e0c9229e9d98b68a1bd62a1055f1be0bb47f9 Mar 17 20:27:29.621913 unknown[779]: fetched base config from "system" Mar 17 20:27:29.621930 unknown[779]: fetched base config from "system" Mar 17 20:27:29.622332 ignition[779]: fetch: fetch complete Mar 17 20:27:29.621940 unknown[779]: fetched user config from "openstack" Mar 17 20:27:29.622342 ignition[779]: fetch: fetch passed Mar 17 20:27:29.629186 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 20:27:29.627262 ignition[779]: Ignition finished successfully Mar 17 20:27:29.642941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 20:27:29.663155 ignition[787]: Ignition 2.20.0 Mar 17 20:27:29.663178 ignition[787]: Stage: kargs Mar 17 20:27:29.663418 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 17 20:27:29.665910 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 20:27:29.663440 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:27:29.664522 ignition[787]: kargs: kargs passed Mar 17 20:27:29.664613 ignition[787]: Ignition finished successfully Mar 17 20:27:29.673874 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
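Editor's note: the fetch stage above shows the pattern clearly: attempt #1 against the OpenStack metadata service fails with "network is unreachable" because eth0 has not yet acquired its DHCP lease, and attempt #2 succeeds once the address is configured. The sketch below mirrors that retry loop; the URL is taken from the log, while the retry count and delay are simplified stand-ins, not Ignition's actual policy.

    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(retries=5, delay=2.0):
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                # e.g. "dial tcp 169.254.169.254:80: connect: network is unreachable"
                print(f"GET attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("could not reach the metadata service")

    if __name__ == "__main__":
        data = fetch_user_data()
        print(f"fetched {len(data)} bytes of user_data")
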
Mar 17 20:27:29.691127 ignition[793]: Ignition 2.20.0 Mar 17 20:27:29.691148 ignition[793]: Stage: disks Mar 17 20:27:29.691370 ignition[793]: no configs at "/usr/lib/ignition/base.d" Mar 17 20:27:29.691391 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:27:29.692487 ignition[793]: disks: disks passed Mar 17 20:27:29.694997 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 20:27:29.692571 ignition[793]: Ignition finished successfully Mar 17 20:27:29.697154 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 20:27:29.698335 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 20:27:29.699740 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 20:27:29.701246 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 20:27:29.702810 systemd[1]: Reached target basic.target - Basic System. Mar 17 20:27:29.708872 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 20:27:29.739598 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 17 20:27:29.742839 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 20:27:30.044806 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 20:27:30.166695 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none. Mar 17 20:27:30.168289 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 20:27:30.169604 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 20:27:30.182847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 20:27:30.185796 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 20:27:30.189741 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 20:27:30.191499 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Mar 17 20:27:30.205910 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809) Mar 17 20:27:30.205946 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 20:27:30.205967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:27:30.205986 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:27:30.196001 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 20:27:30.196047 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 20:27:30.211627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 20:27:30.221185 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 20:27:30.227678 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 20:27:30.234108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 20:27:30.305353 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 20:27:30.317909 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Mar 17 20:27:30.330619 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 20:27:30.337623 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 20:27:30.455112 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 20:27:30.461789 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 20:27:30.464850 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 20:27:30.478715 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 20:27:30.509845 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 20:27:30.513106 ignition[927]: INFO : Ignition 2.20.0 Mar 17 20:27:30.515734 ignition[927]: INFO : Stage: mount Mar 17 20:27:30.515734 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 20:27:30.515734 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:27:30.518953 ignition[927]: INFO : mount: mount passed Mar 17 20:27:30.518953 ignition[927]: INFO : Ignition finished successfully Mar 17 20:27:30.518248 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 20:27:31.039613 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 20:27:31.090933 systemd-networkd[775]: eth0: Gained IPv6LL Mar 17 20:27:32.599639 systemd-networkd[775]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e5f:24:19ff:fee6:397e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e5f:24:19ff:fee6:397e/64 assigned by NDisc. Mar 17 20:27:32.599710 systemd-networkd[775]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 17 20:27:37.371778 coreos-metadata[811]: Mar 17 20:27:37.371 WARN failed to locate config-drive, using the metadata service API instead Mar 17 20:27:37.393253 coreos-metadata[811]: Mar 17 20:27:37.393 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 20:27:37.407111 coreos-metadata[811]: Mar 17 20:27:37.407 INFO Fetch successful Mar 17 20:27:37.408367 coreos-metadata[811]: Mar 17 20:27:37.408 INFO wrote hostname srv-24y52.gb1.brightbox.com to /sysroot/etc/hostname Mar 17 20:27:37.410876 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 17 20:27:37.411251 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Mar 17 20:27:37.422876 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 20:27:37.449876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 20:27:37.464705 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944) Mar 17 20:27:37.469858 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 20:27:37.469908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:27:37.471130 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:27:37.476698 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 20:27:37.480918 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 20:27:37.512953 ignition[962]: INFO : Ignition 2.20.0 Mar 17 20:27:37.512953 ignition[962]: INFO : Stage: files Mar 17 20:27:37.514786 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 20:27:37.514786 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:27:37.514786 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Mar 17 20:27:37.517535 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 20:27:37.517535 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 20:27:37.519591 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 20:27:37.519591 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 20:27:37.521501 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 20:27:37.520098 unknown[962]: wrote ssh authorized keys file for user: core Mar 17 20:27:37.523443 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 20:27:37.523443 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Mar 17 20:27:37.757567 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 20:27:38.078495 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 20:27:38.078495 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 20:27:38.087471 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 20:27:38.754244 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 20:27:39.158434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 20:27:39.158434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 20:27:39.160876 
ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:27:39.160876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 17 20:27:39.753477 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 20:27:42.034164 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:27:42.034164 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 17 20:27:42.038052 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 20:27:42.038052 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 20:27:42.038052 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 17 20:27:42.038052 ignition[962]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 17 20:27:42.038052 ignition[962]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 20:27:42.046747 ignition[962]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 20:27:42.046747 ignition[962]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 20:27:42.046747 ignition[962]: INFO : files: files passed Mar 17 20:27:42.046747 ignition[962]: INFO : Ignition finished successfully Mar 17 20:27:42.042391 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 20:27:42.055973 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 20:27:42.060873 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 20:27:42.067783 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 20:27:42.067955 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
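Editor's note: the files stage above writes an SSH key for "core", downloads helm and the kubernetes sysext image, creates the /etc/extensions/kubernetes.raw symlink, and enables prepare-helm.service. The snippet below is a hypothetical reconstruction of the kind of Ignition v3 config that produces those operations; the two download URLs and file paths are taken from the log, but the SSH key and unit contents were not recorded, so placeholders are used.

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder key"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
                 "hard": False},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "# unit text not recorded in the log\n"},
        ]},
    }

    print(json.dumps(config, indent=2))
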
Mar 17 20:27:42.088751 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 20:27:42.088751 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 20:27:42.092728 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 20:27:42.095454 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 20:27:42.096831 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 20:27:42.103895 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 20:27:42.140552 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 20:27:42.141524 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 20:27:42.143286 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 20:27:42.144243 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 20:27:42.145966 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 20:27:42.153991 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 20:27:42.172507 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 20:27:42.177868 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 20:27:42.200584 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 20:27:42.201507 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 20:27:42.203096 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 20:27:42.204586 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 20:27:42.204814 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 20:27:42.206546 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 20:27:42.207438 systemd[1]: Stopped target basic.target - Basic System. Mar 17 20:27:42.208949 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 20:27:42.210376 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 20:27:42.211773 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 20:27:42.213282 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 20:27:42.214830 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 20:27:42.216385 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 20:27:42.217810 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 20:27:42.219390 systemd[1]: Stopped target swap.target - Swaps. Mar 17 20:27:42.220697 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 20:27:42.220917 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 20:27:42.222934 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 20:27:42.223903 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 20:27:42.225300 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 20:27:42.225739 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 17 20:27:42.226948 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 20:27:42.227226 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 20:27:42.229072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 20:27:42.229339 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 20:27:42.230975 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 20:27:42.231227 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 20:27:42.239955 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 20:27:42.240711 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 20:27:42.240899 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 20:27:42.257895 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 20:27:42.260730 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 20:27:42.260935 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 20:27:42.262315 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 20:27:42.262504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 20:27:42.275493 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 20:27:42.275652 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 20:27:42.287349 ignition[1014]: INFO : Ignition 2.20.0 Mar 17 20:27:42.290260 ignition[1014]: INFO : Stage: umount Mar 17 20:27:42.290260 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 20:27:42.290260 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:27:42.290260 ignition[1014]: INFO : umount: umount passed Mar 17 20:27:42.290260 ignition[1014]: INFO : Ignition finished successfully Mar 17 20:27:42.291397 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 20:27:42.292516 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 20:27:42.293771 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 20:27:42.295527 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 20:27:42.295718 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 20:27:42.296640 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 20:27:42.296745 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 20:27:42.297499 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 20:27:42.297586 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 20:27:42.299226 systemd[1]: Stopped target network.target - Network. Mar 17 20:27:42.300466 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 20:27:42.300567 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 20:27:42.301971 systemd[1]: Stopped target paths.target - Path Units. Mar 17 20:27:42.303216 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 20:27:42.305873 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 20:27:42.306911 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 20:27:42.308256 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 20:27:42.309674 systemd[1]: iscsid.socket: Deactivated successfully. 
Mar 17 20:27:42.309771 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 20:27:42.311104 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 20:27:42.311186 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 20:27:42.312584 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 20:27:42.312712 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 20:27:42.313914 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 20:27:42.314003 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 20:27:42.315805 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 20:27:42.318294 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 20:27:42.319926 systemd-networkd[775]: eth0: DHCPv6 lease lost Mar 17 20:27:42.322497 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 20:27:42.322678 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 20:27:42.326479 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 20:27:42.326731 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 20:27:42.329295 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 20:27:42.329703 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 20:27:42.329957 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 20:27:42.336939 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 20:27:42.339070 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 20:27:42.339378 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 20:27:42.340501 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 20:27:42.340583 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 20:27:42.348846 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 20:27:42.349937 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 20:27:42.350034 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 20:27:42.350912 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 20:27:42.351018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 20:27:42.352339 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 20:27:42.352414 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 20:27:42.353441 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 20:27:42.353514 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 20:27:42.355620 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 20:27:42.358029 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 20:27:42.358129 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 20:27:42.367388 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 20:27:42.368533 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 20:27:42.370573 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 17 20:27:42.371792 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 20:27:42.373467 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 20:27:42.373609 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 20:27:42.375354 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 20:27:42.375420 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 20:27:42.376896 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 20:27:42.376988 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 20:27:42.379189 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 20:27:42.379267 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 20:27:42.380611 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 20:27:42.380709 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 20:27:42.387880 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 20:27:42.388643 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 20:27:42.388737 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 20:27:42.391551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 20:27:42.391647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 20:27:42.394881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 20:27:42.394981 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 20:27:42.399923 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 20:27:42.400061 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 20:27:42.401418 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 20:27:42.408886 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 20:27:42.421362 systemd[1]: Switching root. Mar 17 20:27:42.454039 systemd-journald[201]: Journal stopped Mar 17 20:27:44.236861 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Mar 17 20:27:44.237079 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 20:27:44.237164 kernel: SELinux: policy capability open_perms=1 Mar 17 20:27:44.237196 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 20:27:44.237242 kernel: SELinux: policy capability always_check_network=0 Mar 17 20:27:44.237281 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 20:27:44.237329 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 20:27:44.237360 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 20:27:44.237400 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 20:27:44.237426 kernel: audit: type=1403 audit(1742243262.816:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 20:27:44.237471 systemd[1]: Successfully loaded SELinux policy in 68.005ms. Mar 17 20:27:44.237528 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.700ms. 
Mar 17 20:27:44.237566 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 20:27:44.237590 systemd[1]: Detected virtualization kvm. Mar 17 20:27:44.237625 systemd[1]: Detected architecture x86-64. Mar 17 20:27:44.237648 systemd[1]: Detected first boot. Mar 17 20:27:44.237709 systemd[1]: Hostname set to . Mar 17 20:27:44.237766 systemd[1]: Initializing machine ID from VM UUID. Mar 17 20:27:44.237800 zram_generator::config[1059]: No configuration found. Mar 17 20:27:44.237839 kernel: Guest personality initialized and is inactive Mar 17 20:27:44.237870 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 17 20:27:44.237910 kernel: Initialized host personality Mar 17 20:27:44.237941 kernel: NET: Registered PF_VSOCK protocol family Mar 17 20:27:44.237991 systemd[1]: Populated /etc with preset unit settings. Mar 17 20:27:44.238022 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 20:27:44.238062 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 20:27:44.238092 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 20:27:44.238124 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 20:27:44.238152 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 20:27:44.238173 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 20:27:44.238194 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 20:27:44.238246 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 20:27:44.238277 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 20:27:44.238326 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 20:27:44.238350 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 20:27:44.238375 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 20:27:44.238411 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 20:27:44.238440 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 20:27:44.238467 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 20:27:44.238497 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 20:27:44.238551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 20:27:44.238593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 20:27:44.238620 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 20:27:44.238677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 20:27:44.238709 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 20:27:44.238739 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Mar 17 20:27:44.238765 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 20:27:44.238835 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 20:27:44.238870 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 20:27:44.238893 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 20:27:44.238914 systemd[1]: Reached target slices.target - Slice Units. Mar 17 20:27:44.238934 systemd[1]: Reached target swap.target - Swaps. Mar 17 20:27:44.238968 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 20:27:44.238997 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 20:27:44.239019 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 20:27:44.239039 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 20:27:44.239098 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 20:27:44.239146 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 20:27:44.239179 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 20:27:44.239209 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 20:27:44.239236 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 20:27:44.239274 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 20:27:44.239309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:44.239332 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 20:27:44.239353 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 20:27:44.239379 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 20:27:44.239417 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 20:27:44.239451 systemd[1]: Reached target machines.target - Containers. Mar 17 20:27:44.239483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 20:27:44.239535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 20:27:44.239563 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 20:27:44.239590 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 20:27:44.239611 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 20:27:44.239630 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 20:27:44.239669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 20:27:44.239706 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 20:27:44.239746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 20:27:44.239798 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 20:27:44.239850 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Mar 17 20:27:44.239896 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 20:27:44.239939 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 20:27:44.239972 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 20:27:44.240021 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 20:27:44.240050 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 20:27:44.240072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 20:27:44.240111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 20:27:44.240163 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 20:27:44.240197 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 20:27:44.240225 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 20:27:44.240252 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 20:27:44.240275 systemd[1]: Stopped verity-setup.service. Mar 17 20:27:44.240312 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:44.240346 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 20:27:44.240384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 20:27:44.240427 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 20:27:44.240467 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 20:27:44.240490 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 20:27:44.240511 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 20:27:44.240557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 20:27:44.240580 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 20:27:44.240606 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 20:27:44.240633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:27:44.240687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 20:27:44.240725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:27:44.240777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 20:27:44.240814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 20:27:44.240852 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 20:27:44.241010 systemd-journald[1156]: Collecting audit messages is disabled. Mar 17 20:27:44.241065 systemd-journald[1156]: Journal started Mar 17 20:27:44.241147 systemd-journald[1156]: Runtime Journal (/run/log/journal/513cf63564c742c7b154b2649a1285f5) is 4.7M, max 37.9M, 33.2M free. Mar 17 20:27:43.764886 systemd[1]: Queued start job for default target multi-user.target. Mar 17 20:27:43.782939 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 20:27:43.783796 systemd[1]: systemd-journald.service: Deactivated successfully. 
Mar 17 20:27:44.247710 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 20:27:44.251694 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 20:27:44.254921 kernel: fuse: init (API version 7.39) Mar 17 20:27:44.257699 kernel: loop: module loaded Mar 17 20:27:44.269150 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 20:27:44.272265 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 20:27:44.272689 kernel: ACPI: bus type drm_connector registered Mar 17 20:27:44.273895 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:27:44.274293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 20:27:44.283825 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 20:27:44.284235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 20:27:44.289161 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 20:27:44.309976 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 20:27:44.328327 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 20:27:44.334743 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 20:27:44.335717 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 20:27:44.335825 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 20:27:44.341564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 20:27:44.363874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 20:27:44.368417 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 20:27:44.369422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 20:27:44.374894 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 20:27:44.387452 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 20:27:44.388414 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:27:44.391999 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 20:27:44.396140 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 20:27:44.407839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 20:27:44.423006 systemd-journald[1156]: Time spent on flushing to /var/log/journal/513cf63564c742c7b154b2649a1285f5 is 164.416ms for 1154 entries. Mar 17 20:27:44.423006 systemd-journald[1156]: System Journal (/var/log/journal/513cf63564c742c7b154b2649a1285f5) is 8M, max 584.8M, 576.8M free. Mar 17 20:27:44.699198 systemd-journald[1156]: Received client request to flush runtime journal. 
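Editor's note: journald above reports a 4.7M runtime journal under /run/log/journal and, after the flush, an 8M system journal under /var/log/journal. The sketch below simply totals those two stores with the standard library; the machine-id subdirectory layout is standard journald behaviour, and reading the persistent journal directory may require root.

    import os

    def tree_size(root):
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # journal files can rotate away while we walk
        return total

    for root in ("/run/log/journal", "/var/log/journal"):
        if os.path.isdir(root):
            print(f"{root}: {tree_size(root) / 1024 / 1024:.1f} MiB")
        else:
            print(f"{root}: not present")
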
Mar 17 20:27:44.699283 kernel: loop0: detected capacity change from 0 to 8 Mar 17 20:27:44.699322 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 20:27:44.699368 kernel: loop1: detected capacity change from 0 to 218376 Mar 17 20:27:44.417888 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 20:27:44.432004 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 20:27:44.436746 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 20:27:44.454422 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 20:27:44.461901 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 20:27:44.463961 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 20:27:44.551617 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 20:27:44.552764 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 20:27:44.563937 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 20:27:44.607539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 20:27:44.636916 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 20:27:44.643362 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 20:27:44.694899 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 20:27:44.705867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 20:27:44.726471 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 20:27:44.739832 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 20:27:44.750900 kernel: loop2: detected capacity change from 0 to 147912 Mar 17 20:27:44.758929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 20:27:44.788209 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 20:27:44.811705 kernel: loop3: detected capacity change from 0 to 138176 Mar 17 20:27:44.901876 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Mar 17 20:27:44.903985 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Mar 17 20:27:44.916714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 20:27:44.938887 kernel: loop4: detected capacity change from 0 to 8 Mar 17 20:27:44.947140 kernel: loop5: detected capacity change from 0 to 218376 Mar 17 20:27:44.985803 kernel: loop6: detected capacity change from 0 to 147912 Mar 17 20:27:45.023725 kernel: loop7: detected capacity change from 0 to 138176 Mar 17 20:27:45.073896 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Mar 17 20:27:45.074973 (sd-merge)[1224]: Merged extensions into '/usr'. Mar 17 20:27:45.082488 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 20:27:45.082581 systemd[1]: Reloading... Mar 17 20:27:45.319708 zram_generator::config[1249]: No configuration found. Mar 17 20:27:45.537289 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
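Editor's note: the sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-openstack extensions onto /usr (the loop0-loop7 capacity changes are those images being attached). The sketch below lists what such a merge discovers; the three search directories are the standard sysext locations and are an assumption here, while the kubernetes symlink under /etc/extensions is the one written during the Ignition files stage earlier in this log.

    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        if not os.path.isdir(d):
            print(f"{d}: not present")
            continue
        for entry in sorted(os.listdir(d)):
            full = os.path.join(d, entry)
            kind = "image" if entry.endswith(".raw") else "directory"
            target = f" -> {os.readlink(full)}" if os.path.islink(full) else ""
            print(f"{full} ({kind}){target}")
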
Mar 17 20:27:45.697376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:27:45.796972 systemd[1]: Reloading finished in 713 ms. Mar 17 20:27:45.817694 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 20:27:45.819579 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 20:27:45.875952 systemd[1]: Starting ensure-sysext.service... Mar 17 20:27:45.887923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 20:27:45.916229 systemd[1]: Reload requested from client PID 1308 ('systemctl') (unit ensure-sysext.service)... Mar 17 20:27:45.916434 systemd[1]: Reloading... Mar 17 20:27:45.974517 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 20:27:45.975142 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 20:27:45.979728 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 20:27:45.980184 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Mar 17 20:27:45.980301 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Mar 17 20:27:45.997683 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 20:27:45.997704 systemd-tmpfiles[1309]: Skipping /boot Mar 17 20:27:46.064358 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 20:27:46.065966 systemd-tmpfiles[1309]: Skipping /boot Mar 17 20:27:46.070689 zram_generator::config[1338]: No configuration found. Mar 17 20:27:46.273204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:27:46.373601 systemd[1]: Reloading finished in 456 ms. Mar 17 20:27:46.390103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 20:27:46.407463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 20:27:46.422071 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 20:27:46.427885 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 20:27:46.432078 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 20:27:46.439786 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 20:27:46.447420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 20:27:46.450755 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 20:27:46.458155 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:46.458463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 20:27:46.467975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 20:27:46.474055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
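Editor's note: twice during the reloads above systemd warns that docker.socket line 6 uses ListenStream= below the legacy /var/run/ directory and rewrites it to /run/docker.sock at load time. The sketch below reproduces that check as a simple scanner; the unit path comes from the warning, and this is a simplified stand-in for systemd's own handling, not a replacement for fixing the unit file.

    UNIT = "/usr/lib/systemd/system/docker.socket"

    with open(UNIT) as f:
        for lineno, line in enumerate(f, start=1):
            key, _, value = line.strip().partition("=")
            if key == "ListenStream" and value.startswith("/var/run/"):
                fixed = "/run/" + value[len("/var/run/"):]
                print(f"{UNIT}:{lineno}: ListenStream={value} "
                      f"-> suggest ListenStream={fixed}")
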
Mar 17 20:27:46.477595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 20:27:46.478876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 20:27:46.479053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 20:27:46.479208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:46.487370 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:46.487681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 20:27:46.487934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 20:27:46.488084 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 20:27:46.500339 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 20:27:46.501579 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:46.509859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:46.510229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 20:27:46.522084 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 20:27:46.523027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 20:27:46.523206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 20:27:46.523421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:27:46.525396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:27:46.527072 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 20:27:46.540768 systemd[1]: Finished ensure-sysext.service. Mar 17 20:27:46.545822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:27:46.546153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 20:27:46.555795 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 20:27:46.558892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 20:27:46.560215 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:27:46.560510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
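Units such as proc-xen.mount and xenserver-pv-version.service are skipped because ConditionVirtualization=xen does not hold on this KVM guest. The same condition can be checked by hand with standard tooling, for example:

    systemd-detect-virt                                  # prints "kvm" on this host, so xen-only units are skipped
    systemctl show -p ConditionResult proc-xen.mount     # shows the recorded condition outcome for the unit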
Mar 17 20:27:46.569609 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:27:46.569775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 20:27:46.579904 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 20:27:46.580726 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:27:46.581317 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 20:27:46.582744 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 20:27:46.613367 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 20:27:46.620885 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 20:27:46.624182 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 20:27:46.626791 systemd-udevd[1401]: Using default interface naming scheme 'v255'. Mar 17 20:27:46.662687 augenrules[1443]: No rules Mar 17 20:27:46.662206 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 20:27:46.664249 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 20:27:46.681767 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 20:27:46.695952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 20:27:46.709916 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 20:27:46.871241 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 20:27:46.872262 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 20:27:46.885942 systemd-resolved[1400]: Positive Trust Anchors: Mar 17 20:27:46.886619 systemd-resolved[1400]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 20:27:46.886797 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 20:27:46.900335 systemd-resolved[1400]: Using system hostname 'srv-24y52.gb1.brightbox.com'. Mar 17 20:27:46.906493 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 20:27:46.907362 systemd-networkd[1454]: lo: Link UP Mar 17 20:27:46.907748 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 20:27:46.907891 systemd-networkd[1454]: lo: Gained carrier Mar 17 20:27:46.911085 systemd-networkd[1454]: Enumeration completed Mar 17 20:27:46.911213 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 20:27:46.912819 systemd[1]: Reached target network.target - Network. Mar 17 20:27:46.920902 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
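systemd-resolved logs its built-in DNSSEC trust anchor and the negative anchors before systemd-networkd finishes enumerating links. Both can be reviewed later with the usual front-ends (real commands; their output summarizes the same state these log lines record):

    resolvectl status     # resolver configuration, DNSSEC trust anchors, per-link DNS
    networkctl list       # link state for lo and eth0 as managed by systemd-networkd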
Mar 17 20:27:46.931835 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 20:27:46.936767 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 20:27:47.024351 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 20:27:47.040694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1458) Mar 17 20:27:47.082706 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 20:27:47.082897 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 20:27:47.087969 systemd-networkd[1454]: eth0: Link UP Mar 17 20:27:47.087981 systemd-networkd[1454]: eth0: Gained carrier Mar 17 20:27:47.088037 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 20:27:47.108818 systemd-networkd[1454]: eth0: DHCPv4 address 10.230.57.126/30, gateway 10.230.57.125 acquired from 10.230.57.125 Mar 17 20:27:47.112191 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Mar 17 20:27:47.154704 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 20:27:47.164970 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 17 20:27:47.182684 kernel: ACPI: button: Power Button [PWRF] Mar 17 20:27:47.242166 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 20:27:47.263736 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 20:27:47.264440 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 20:27:47.293687 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 20:27:47.301819 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 20:27:47.309357 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 20:27:47.304207 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 20:27:47.318987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 20:27:47.535052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 20:27:47.557406 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 20:27:47.570146 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 20:27:47.588522 lvm[1491]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 20:27:47.629348 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 20:27:47.630604 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 20:27:47.631465 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 20:27:47.632370 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 20:27:47.633441 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 20:27:47.634869 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
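eth0 is matched by the shipped catch-all /usr/lib/systemd/network/zz-default.network and picks up 10.230.57.126/30 over DHCPv4. A minimal sketch of what such a catch-all .network file contains (the exact contents of the Flatcar-shipped file are an assumption):

    [Match]
    Name=*

    [Network]
    DHCP=yes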
Mar 17 20:27:47.635783 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 20:27:47.636634 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 20:27:47.637420 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 20:27:47.637473 systemd[1]: Reached target paths.target - Path Units. Mar 17 20:27:47.638174 systemd[1]: Reached target timers.target - Timer Units. Mar 17 20:27:47.643960 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 20:27:47.646899 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 20:27:47.652693 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 20:27:47.653773 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 20:27:47.654565 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 20:27:47.666557 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 20:27:47.667975 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 20:27:47.677965 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 20:27:47.679763 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 20:27:47.680755 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 20:27:47.681455 systemd[1]: Reached target basic.target - Basic System. Mar 17 20:27:47.682240 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 20:27:47.682299 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 20:27:47.684711 lvm[1495]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 20:27:47.685794 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 20:27:47.695959 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 20:27:47.700091 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 20:27:47.706782 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 20:27:47.709227 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 20:27:47.711770 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 20:27:47.723475 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 20:27:47.730764 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 20:27:47.737908 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 20:27:47.747875 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 20:27:47.760352 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 20:27:47.763144 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 20:27:47.766026 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
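With docker.socket, sshd.socket and systemd-hostnamed.socket listening and sockets.target reached, the activation sockets can be listed with stock systemctl verbs:

    systemctl list-sockets            # listen address, socket unit, and the service it activates
    systemctl status docker.socket    # confirms /run/docker.sock is the listen path after the rewrite above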
Mar 17 20:27:47.774048 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 20:27:47.775448 jq[1499]: false Mar 17 20:27:47.782818 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 20:27:47.786606 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 20:27:47.792249 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 20:27:47.792599 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 20:27:47.802591 jq[1513]: true Mar 17 20:27:47.806414 dbus-daemon[1498]: [system] SELinux support is enabled Mar 17 20:27:47.806902 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 20:27:47.837260 dbus-daemon[1498]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1454 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 20:27:47.843413 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 20:27:47.843779 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 20:27:47.849435 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 20:27:47.849483 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 20:27:47.850366 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 20:27:47.850433 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 20:27:47.860890 dbus-daemon[1498]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 20:27:47.871911 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 17 20:27:47.873166 update_engine[1508]: I20250317 20:27:47.873028 1508 main.cc:92] Flatcar Update Engine starting Mar 17 20:27:47.878329 systemd[1]: Started update-engine.service - Update Engine. Mar 17 20:27:47.878892 update_engine[1508]: I20250317 20:27:47.878651 1508 update_check_scheduler.cc:74] Next update check in 2m13s Mar 17 20:27:47.886077 extend-filesystems[1500]: Found loop4 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found loop5 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found loop6 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found loop7 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found vda Mar 17 20:27:47.886077 extend-filesystems[1500]: Found vda1 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found vda2 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found vda3 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found usr Mar 17 20:27:47.886077 extend-filesystems[1500]: Found vda4 Mar 17 20:27:47.886077 extend-filesystems[1500]: Found vda6 Mar 17 20:27:47.899766 extend-filesystems[1500]: Found vda7 Mar 17 20:27:47.899766 extend-filesystems[1500]: Found vda9 Mar 17 20:27:47.899766 extend-filesystems[1500]: Checking size of /dev/vda9 Mar 17 20:27:47.901734 tar[1516]: linux-amd64/LICENSE Mar 17 20:27:47.901734 tar[1516]: linux-amd64/helm Mar 17 20:27:47.888850 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
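update-engine schedules its first check 2m13s after start, as logged by update_check_scheduler.cc. On Flatcar the pending check and current operation can be queried with the bundled client (the single-dash flag spelling below is the historical form and an assumption about this image):

    update_engine_client -status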
Mar 17 20:27:48.121482 jq[1522]: true Mar 17 20:27:48.184279 extend-filesystems[1500]: Resized partition /dev/vda9 Mar 17 20:27:48.216586 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 17 20:27:48.133497 (ntainerd)[1529]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 20:27:48.229512 extend-filesystems[1539]: resize2fs 1.47.1 (20-May-2024) Mar 17 20:27:48.222216 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 20:27:48.222801 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 20:27:48.261912 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 20:27:48.315737 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1462) Mar 17 20:27:48.373333 systemd-networkd[1454]: eth0: Gained IPv6LL Mar 17 20:27:48.376672 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Mar 17 20:27:48.415386 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 17 20:27:48.429824 dbus-daemon[1498]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 20:27:48.427047 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 20:27:48.430147 systemd-logind[1507]: Watching system buttons on /dev/input/event2 (Power Button) Mar 17 20:27:48.430187 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 20:27:48.430709 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 20:27:48.433568 systemd-logind[1507]: New seat seat0. Mar 17 20:27:48.450870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:27:48.456588 dbus-daemon[1498]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1532 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 20:27:48.463944 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 20:27:48.464899 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 20:27:48.484420 systemd[1]: Starting polkit.service - Authorization Manager... Mar 17 20:27:48.532953 bash[1560]: Updated "/home/core/.ssh/authorized_keys" Mar 17 20:27:48.544088 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 20:27:48.597860 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 20:27:48.784714 polkitd[1561]: Started polkitd version 121 Mar 17 20:27:48.720172 systemd[1]: Starting sshkeys.service... Mar 17 20:27:48.824892 polkitd[1561]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 20:27:48.829031 polkitd[1561]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 20:27:48.836453 polkitd[1561]: Finished loading, compiling and executing 2 rules Mar 17 20:27:48.837614 dbus-daemon[1498]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 20:27:48.837831 systemd[1]: Started polkit.service - Authorization Manager. Mar 17 20:27:48.843203 polkitd[1561]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 20:27:48.905138 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 20:27:48.925767 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
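extend-filesystems grows the vda9 partition and the ext4 filesystem is then resized online from 1617920 to 15121403 blocks. Done by hand, the equivalent steps would be roughly the following (growpart comes from cloud-utils and is only an illustration of the partition-grow step; the log itself shows resize2fs 1.47.1 performing the filesystem part):

    growpart /dev/vda 9     # grow partition 9 to the end of the disk (illustrative)
    resize2fs /dev/vda9     # online-resize the mounted ext4 filesystem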
Mar 17 20:27:48.936561 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 20:27:48.965950 systemd-hostnamed[1532]: Hostname set to (static) Mar 17 20:27:48.980963 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 20:27:48.980963 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 20:27:48.980963 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 20:27:48.976145 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 20:27:49.035940 extend-filesystems[1500]: Resized filesystem in /dev/vda9 Mar 17 20:27:48.976569 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 20:27:48.993672 systemd-networkd[1454]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e5f:24:19ff:fee6:397e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e5f:24:19ff:fee6:397e/64 assigned by NDisc. Mar 17 20:27:48.993682 systemd-networkd[1454]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 17 20:27:49.001652 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Mar 17 20:27:49.126222 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 20:27:49.223956 containerd[1529]: time="2025-03-17T20:27:49.223785404Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 20:27:49.341347 containerd[1529]: time="2025-03-17T20:27:49.341272022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.347922167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.347995772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348023892Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348348299Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348390770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348521608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348546269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348873734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348900522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348924134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:27:49.349821 containerd[1529]: time="2025-03-17T20:27:49.348953502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 20:27:49.350304 containerd[1529]: time="2025-03-17T20:27:49.349144749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:27:49.350304 containerd[1529]: time="2025-03-17T20:27:49.349628610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:27:49.355803 containerd[1529]: time="2025-03-17T20:27:49.355750355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:27:49.355936 containerd[1529]: time="2025-03-17T20:27:49.355907643Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 20:27:49.358015 containerd[1529]: time="2025-03-17T20:27:49.357559728Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 20:27:49.358015 containerd[1529]: time="2025-03-17T20:27:49.357721555Z" level=info msg="metadata content store policy set" policy=shared Mar 17 20:27:49.371990 containerd[1529]: time="2025-03-17T20:27:49.371933227Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 20:27:49.372254 containerd[1529]: time="2025-03-17T20:27:49.372217307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 20:27:49.372402 containerd[1529]: time="2025-03-17T20:27:49.372374154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 20:27:49.372548 containerd[1529]: time="2025-03-17T20:27:49.372521869Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 20:27:49.372830 containerd[1529]: time="2025-03-17T20:27:49.372692225Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.373444795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.373835575Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374071624Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374102772Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374127320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374149016Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374170480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374190420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374211922Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374236837Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374279925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374307990Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374328392Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 20:27:49.375073 containerd[1529]: time="2025-03-17T20:27:49.374387850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374412568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374443736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374467705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374494869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374536293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374559535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374579601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.375688 containerd[1529]: time="2025-03-17T20:27:49.374619303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 17 20:27:49.378977 containerd[1529]: time="2025-03-17T20:27:49.374647456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.378977 containerd[1529]: time="2025-03-17T20:27:49.378515384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.378977 containerd[1529]: time="2025-03-17T20:27:49.378542576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.378977 containerd[1529]: time="2025-03-17T20:27:49.378599734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.378977 containerd[1529]: time="2025-03-17T20:27:49.378652014Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 20:27:49.379791 containerd[1529]: time="2025-03-17T20:27:49.379243898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.379791 containerd[1529]: time="2025-03-17T20:27:49.379286321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.380071 containerd[1529]: time="2025-03-17T20:27:49.379922887Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 20:27:49.380334 containerd[1529]: time="2025-03-17T20:27:49.380167782Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 20:27:49.380691 containerd[1529]: time="2025-03-17T20:27:49.380206221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 20:27:49.380691 containerd[1529]: time="2025-03-17T20:27:49.380448902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 20:27:49.380955 containerd[1529]: time="2025-03-17T20:27:49.380473939Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 20:27:49.380955 containerd[1529]: time="2025-03-17T20:27:49.380899138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 20:27:49.381450 containerd[1529]: time="2025-03-17T20:27:49.381191091Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 20:27:49.382816 containerd[1529]: time="2025-03-17T20:27:49.381578000Z" level=info msg="NRI interface is disabled by configuration." Mar 17 20:27:49.382816 containerd[1529]: time="2025-03-17T20:27:49.382772463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 20:27:49.386227 containerd[1529]: time="2025-03-17T20:27:49.386126518Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 20:27:49.387381 containerd[1529]: time="2025-03-17T20:27:49.386771808Z" level=info msg="Connect containerd service" Mar 17 20:27:49.387381 containerd[1529]: time="2025-03-17T20:27:49.386912656Z" level=info msg="using legacy CRI server" Mar 17 20:27:49.387381 containerd[1529]: time="2025-03-17T20:27:49.386948725Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 20:27:49.387381 containerd[1529]: time="2025-03-17T20:27:49.387249426Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 20:27:49.392976 containerd[1529]: time="2025-03-17T20:27:49.392137420Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 20:27:49.392976 
containerd[1529]: time="2025-03-17T20:27:49.392323737Z" level=info msg="Start subscribing containerd event" Mar 17 20:27:49.392976 containerd[1529]: time="2025-03-17T20:27:49.392446316Z" level=info msg="Start recovering state" Mar 17 20:27:49.392976 containerd[1529]: time="2025-03-17T20:27:49.392609840Z" level=info msg="Start event monitor" Mar 17 20:27:49.392976 containerd[1529]: time="2025-03-17T20:27:49.392684650Z" level=info msg="Start snapshots syncer" Mar 17 20:27:49.392976 containerd[1529]: time="2025-03-17T20:27:49.392715243Z" level=info msg="Start cni network conf syncer for default" Mar 17 20:27:49.392976 containerd[1529]: time="2025-03-17T20:27:49.392737497Z" level=info msg="Start streaming server" Mar 17 20:27:49.397538 containerd[1529]: time="2025-03-17T20:27:49.394073447Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 20:27:49.397538 containerd[1529]: time="2025-03-17T20:27:49.394173628Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 20:27:49.394413 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 20:27:49.398709 containerd[1529]: time="2025-03-17T20:27:49.398045200Z" level=info msg="containerd successfully booted in 0.176763s" Mar 17 20:27:49.577048 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 20:27:49.674394 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 20:27:49.691841 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 20:27:49.704416 systemd[1]: Started sshd@0-10.230.57.126:22-139.178.89.65:43796.service - OpenSSH per-connection server daemon (139.178.89.65:43796). Mar 17 20:27:49.748188 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 20:27:49.750184 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 20:27:49.765414 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 20:27:49.864089 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 20:27:49.874524 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 20:27:49.884329 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 20:27:49.886216 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 20:27:50.310048 tar[1516]: linux-amd64/README.md Mar 17 20:27:50.332778 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 20:27:50.718087 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 43796 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:27:50.721517 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:27:50.746996 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 20:27:50.759190 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 20:27:50.767253 systemd-logind[1507]: New session 1 of user core. Mar 17 20:27:50.817386 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 20:27:50.835226 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 20:27:50.841474 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:27:50.848053 systemd-logind[1507]: New session c1 of user core. Mar 17 20:27:50.988935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
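The long CRI plugin dump above shows the settings containerd is running with: the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup=true, sandbox image registry.k8s.io/pause:3.8, and CNI configuration under /etc/cni/net.d. A minimal config.toml expressing the same choices (a sketch of the relevant keys only, not the full file shipped on this host):

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"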
Mar 17 20:27:51.001946 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Mar 17 20:27:51.006911 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 20:27:51.089913 systemd[1624]: Queued start job for default target default.target. Mar 17 20:27:51.108753 systemd[1624]: Created slice app.slice - User Application Slice. Mar 17 20:27:51.108794 systemd[1624]: Reached target paths.target - Paths. Mar 17 20:27:51.108883 systemd[1624]: Reached target timers.target - Timers. Mar 17 20:27:51.113864 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 20:27:51.133762 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 20:27:51.134791 systemd[1624]: Reached target sockets.target - Sockets. Mar 17 20:27:51.134882 systemd[1624]: Reached target basic.target - Basic System. Mar 17 20:27:51.134990 systemd[1624]: Reached target default.target - Main User Target. Mar 17 20:27:51.135068 systemd[1624]: Startup finished in 272ms. Mar 17 20:27:51.135998 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 20:27:51.146191 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 20:27:51.797209 systemd[1]: Started sshd@1-10.230.57.126:22-139.178.89.65:52302.service - OpenSSH per-connection server daemon (139.178.89.65:52302). Mar 17 20:27:51.939188 kubelet[1636]: E0317 20:27:51.939112 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:27:51.942488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:27:51.942838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:27:51.943605 systemd[1]: kubelet.service: Consumed 1.986s CPU time, 255.5M memory peak. Mar 17 20:27:52.699927 sshd[1646]: Accepted publickey for core from 139.178.89.65 port 52302 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:27:52.701950 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:27:52.711942 systemd-logind[1507]: New session 2 of user core. Mar 17 20:27:52.722260 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 20:27:53.319372 sshd[1651]: Connection closed by 139.178.89.65 port 52302 Mar 17 20:27:53.318609 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Mar 17 20:27:53.323173 systemd[1]: sshd@1-10.230.57.126:22-139.178.89.65:52302.service: Deactivated successfully. Mar 17 20:27:53.325639 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 20:27:53.327991 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit. Mar 17 20:27:53.329490 systemd-logind[1507]: Removed session 2. Mar 17 20:27:53.488492 systemd[1]: Started sshd@2-10.230.57.126:22-139.178.89.65:52306.service - OpenSSH per-connection server daemon (139.178.89.65:52306). 
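The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during kubeadm init/join, so the repeated failures in this log are expected until the node is bootstrapped. For illustration, a minimal KubeletConfiguration of the kind kubeadm generates (the field values below are assumptions, not what this host ultimately receives):

    # /var/lib/kubelet/config.yaml (illustrative only)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false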
Mar 17 20:27:54.389420 sshd[1657]: Accepted publickey for core from 139.178.89.65 port 52306 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:27:54.393373 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:27:54.402740 systemd-logind[1507]: New session 3 of user core. Mar 17 20:27:54.412058 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 20:27:54.944333 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 20:27:54.955835 systemd-logind[1507]: New session 4 of user core. Mar 17 20:27:54.964579 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 20:27:54.980946 login[1616]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 20:27:54.991823 systemd-logind[1507]: New session 5 of user core. Mar 17 20:27:54.999996 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 20:27:55.015997 sshd[1659]: Connection closed by 139.178.89.65 port 52306 Mar 17 20:27:55.016932 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Mar 17 20:27:55.023328 systemd[1]: sshd@2-10.230.57.126:22-139.178.89.65:52306.service: Deactivated successfully. Mar 17 20:27:55.027811 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 20:27:55.031881 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit. Mar 17 20:27:55.035109 systemd-logind[1507]: Removed session 3. Mar 17 20:27:55.066417 coreos-metadata[1497]: Mar 17 20:27:55.066 WARN failed to locate config-drive, using the metadata service API instead Mar 17 20:27:55.094562 coreos-metadata[1497]: Mar 17 20:27:55.094 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 17 20:27:55.100978 coreos-metadata[1497]: Mar 17 20:27:55.100 INFO Fetch failed with 404: resource not found Mar 17 20:27:55.100978 coreos-metadata[1497]: Mar 17 20:27:55.100 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 20:27:55.102060 coreos-metadata[1497]: Mar 17 20:27:55.102 INFO Fetch successful Mar 17 20:27:55.102227 coreos-metadata[1497]: Mar 17 20:27:55.102 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 17 20:27:55.114568 coreos-metadata[1497]: Mar 17 20:27:55.114 INFO Fetch successful Mar 17 20:27:55.114945 coreos-metadata[1497]: Mar 17 20:27:55.114 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 17 20:27:55.131219 coreos-metadata[1497]: Mar 17 20:27:55.131 INFO Fetch successful Mar 17 20:27:55.131219 coreos-metadata[1497]: Mar 17 20:27:55.131 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 17 20:27:55.148981 coreos-metadata[1497]: Mar 17 20:27:55.148 INFO Fetch successful Mar 17 20:27:55.148981 coreos-metadata[1497]: Mar 17 20:27:55.148 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 17 20:27:55.168382 coreos-metadata[1497]: Mar 17 20:27:55.168 INFO Fetch successful Mar 17 20:27:55.197038 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 20:27:55.198031 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
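coreos-metadata fails to locate an OpenStack config-drive and falls back to the EC2-compatible metadata service, fetching hostname, instance-id, instance-type and addresses one by one. The same endpoints can be queried by hand from inside the instance (URLs copied from the log lines above):

    curl -sf http://169.254.169.254/latest/meta-data/hostname
    curl -sf http://169.254.169.254/latest/meta-data/instance-id
    curl -sf http://169.254.169.254/latest/meta-data/public-ipv4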
Mar 17 20:27:56.365222 coreos-metadata[1590]: Mar 17 20:27:56.365 WARN failed to locate config-drive, using the metadata service API instead Mar 17 20:27:56.387976 coreos-metadata[1590]: Mar 17 20:27:56.387 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 20:27:56.413057 coreos-metadata[1590]: Mar 17 20:27:56.412 INFO Fetch successful Mar 17 20:27:56.413505 coreos-metadata[1590]: Mar 17 20:27:56.413 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 20:27:56.446643 coreos-metadata[1590]: Mar 17 20:27:56.446 INFO Fetch successful Mar 17 20:27:56.448939 unknown[1590]: wrote ssh authorized keys file for user: core Mar 17 20:27:56.484392 update-ssh-keys[1697]: Updated "/home/core/.ssh/authorized_keys" Mar 17 20:27:56.485469 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 20:27:56.489036 systemd[1]: Finished sshkeys.service. Mar 17 20:27:56.492773 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 20:27:56.493064 systemd[1]: Startup finished in 1.565s (kernel) + 17.031s (initrd) + 13.741s (userspace) = 32.338s. Mar 17 20:28:01.993298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 20:28:02.005964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:02.301778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:02.317188 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 20:28:02.376109 kubelet[1709]: E0317 20:28:02.376016 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:28:02.379876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:28:02.380125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:28:02.380736 systemd[1]: kubelet.service: Consumed 340ms CPU time, 103.7M memory peak. Mar 17 20:28:05.181051 systemd[1]: Started sshd@3-10.230.57.126:22-139.178.89.65:47678.service - OpenSSH per-connection server daemon (139.178.89.65:47678). Mar 17 20:28:06.072558 sshd[1716]: Accepted publickey for core from 139.178.89.65 port 47678 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:28:06.074913 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:28:06.083984 systemd-logind[1507]: New session 6 of user core. Mar 17 20:28:06.095933 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 20:28:06.692768 sshd[1718]: Connection closed by 139.178.89.65 port 47678 Mar 17 20:28:06.693889 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Mar 17 20:28:06.699521 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. Mar 17 20:28:06.700182 systemd[1]: sshd@3-10.230.57.126:22-139.178.89.65:47678.service: Deactivated successfully. Mar 17 20:28:06.702750 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 20:28:06.704115 systemd-logind[1507]: Removed session 6. 
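The "Startup finished in 1.565s (kernel) + 17.031s (initrd) + 13.741s (userspace) = 32.338s" summary is the same figure systemd-analyze reports after boot; the units behind the 13.7s of userspace time can be broken down with the standard tools:

    systemd-analyze                    # repeats the kernel/initrd/userspace totals
    systemd-analyze blame | head -20   # slowest units first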
Mar 17 20:28:06.851045 systemd[1]: Started sshd@4-10.230.57.126:22-139.178.89.65:47690.service - OpenSSH per-connection server daemon (139.178.89.65:47690). Mar 17 20:28:07.756842 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 47690 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:28:07.759164 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:28:07.770617 systemd-logind[1507]: New session 7 of user core. Mar 17 20:28:07.776915 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 20:28:08.370001 sshd[1726]: Connection closed by 139.178.89.65 port 47690 Mar 17 20:28:08.370752 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Mar 17 20:28:08.376269 systemd[1]: sshd@4-10.230.57.126:22-139.178.89.65:47690.service: Deactivated successfully. Mar 17 20:28:08.379082 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 20:28:08.381255 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. Mar 17 20:28:08.383064 systemd-logind[1507]: Removed session 7. Mar 17 20:28:08.532005 systemd[1]: Started sshd@5-10.230.57.126:22-139.178.89.65:47698.service - OpenSSH per-connection server daemon (139.178.89.65:47698). Mar 17 20:28:09.425920 sshd[1732]: Accepted publickey for core from 139.178.89.65 port 47698 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:28:09.428009 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:28:09.435740 systemd-logind[1507]: New session 8 of user core. Mar 17 20:28:09.445891 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 20:28:10.044914 sshd[1734]: Connection closed by 139.178.89.65 port 47698 Mar 17 20:28:10.046207 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Mar 17 20:28:10.052707 systemd[1]: sshd@5-10.230.57.126:22-139.178.89.65:47698.service: Deactivated successfully. Mar 17 20:28:10.055419 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 20:28:10.056519 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. Mar 17 20:28:10.058120 systemd-logind[1507]: Removed session 8. Mar 17 20:28:10.214051 systemd[1]: Started sshd@6-10.230.57.126:22-139.178.89.65:56124.service - OpenSSH per-connection server daemon (139.178.89.65:56124). Mar 17 20:28:11.105672 sshd[1740]: Accepted publickey for core from 139.178.89.65 port 56124 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:28:11.107973 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:28:11.116771 systemd-logind[1507]: New session 9 of user core. Mar 17 20:28:11.122879 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 20:28:11.598739 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 20:28:11.599824 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 20:28:11.617501 sudo[1743]: pam_unix(sudo:session): session closed for user root Mar 17 20:28:11.762694 sshd[1742]: Connection closed by 139.178.89.65 port 56124 Mar 17 20:28:11.761976 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Mar 17 20:28:11.766961 systemd[1]: sshd@6-10.230.57.126:22-139.178.89.65:56124.service: Deactivated successfully. Mar 17 20:28:11.769738 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 17 20:28:11.772001 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. Mar 17 20:28:11.773591 systemd-logind[1507]: Removed session 9. Mar 17 20:28:11.919012 systemd[1]: Started sshd@7-10.230.57.126:22-139.178.89.65:56126.service - OpenSSH per-connection server daemon (139.178.89.65:56126). Mar 17 20:28:12.492755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 20:28:12.502942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:12.742083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:12.748244 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 20:28:12.842618 sshd[1749]: Accepted publickey for core from 139.178.89.65 port 56126 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:28:12.845991 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:28:12.853696 kubelet[1759]: E0317 20:28:12.853472 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:28:12.856575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:28:12.856852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:28:12.857422 systemd[1]: kubelet.service: Consumed 319ms CPU time, 105M memory peak. Mar 17 20:28:12.861492 systemd-logind[1507]: New session 10 of user core. Mar 17 20:28:12.869911 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 20:28:13.319553 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 20:28:13.320033 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 20:28:13.325547 sudo[1768]: pam_unix(sudo:session): session closed for user root Mar 17 20:28:13.334841 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 20:28:13.335340 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 20:28:13.357211 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 20:28:13.400542 augenrules[1790]: No rules Mar 17 20:28:13.401591 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 20:28:13.401976 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 20:28:13.403372 sudo[1767]: pam_unix(sudo:session): session closed for user root Mar 17 20:28:13.546838 sshd[1766]: Connection closed by 139.178.89.65 port 56126 Mar 17 20:28:13.548032 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Mar 17 20:28:13.552646 systemd[1]: sshd@7-10.230.57.126:22-139.178.89.65:56126.service: Deactivated successfully. Mar 17 20:28:13.555013 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 20:28:13.557018 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. Mar 17 20:28:13.558555 systemd-logind[1507]: Removed session 10. 
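The sudo commands remove the default fragments under /etc/audit/rules.d/ and restart audit-rules, after which augenrules reports "No rules". The same reload path can be exercised manually with the stock auditd tooling:

    augenrules --load    # recompile /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list loaded rules; prints "No rules" in this state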
Mar 17 20:28:13.709036 systemd[1]: Started sshd@8-10.230.57.126:22-139.178.89.65:56132.service - OpenSSH per-connection server daemon (139.178.89.65:56132). Mar 17 20:28:14.602264 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 56132 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:28:14.604252 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:28:14.612085 systemd-logind[1507]: New session 11 of user core. Mar 17 20:28:14.621944 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 20:28:15.077741 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 20:28:15.078225 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 20:28:15.923373 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 20:28:15.924216 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 20:28:16.503175 dockerd[1819]: time="2025-03-17T20:28:16.502647049Z" level=info msg="Starting up" Mar 17 20:28:16.660826 dockerd[1819]: time="2025-03-17T20:28:16.660601357Z" level=info msg="Loading containers: start." Mar 17 20:28:16.896713 kernel: Initializing XFRM netlink socket Mar 17 20:28:16.965337 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Mar 17 20:28:17.046233 systemd-networkd[1454]: docker0: Link UP Mar 17 20:28:17.078708 dockerd[1819]: time="2025-03-17T20:28:17.078581594Z" level=info msg="Loading containers: done." Mar 17 20:28:17.103766 dockerd[1819]: time="2025-03-17T20:28:17.103693070Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 20:28:17.103968 dockerd[1819]: time="2025-03-17T20:28:17.103943430Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 20:28:17.104205 dockerd[1819]: time="2025-03-17T20:28:17.104167840Z" level=info msg="Daemon has completed initialization" Mar 17 20:28:17.144930 dockerd[1819]: time="2025-03-17T20:28:17.144781357Z" level=info msg="API listen on /run/docker.sock" Mar 17 20:28:17.145051 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 20:28:18.627693 systemd-timesyncd[1424]: Contacted time server [2a03:b0c0:1:d0::1f9:f001]:123 (2.flatcar.pool.ntp.org). Mar 17 20:28:18.627720 systemd-resolved[1400]: Clock change detected. Flushing caches. Mar 17 20:28:18.627785 systemd-timesyncd[1424]: Initial clock synchronization to Mon 2025-03-17 20:28:18.627306 UTC. Mar 17 20:28:19.552243 containerd[1529]: time="2025-03-17T20:28:19.551107093Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 20:28:20.371543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31627289.mount: Deactivated successfully. Mar 17 20:28:20.399420 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 17 20:28:22.505548 containerd[1529]: time="2025-03-17T20:28:22.505445129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:22.507333 containerd[1529]: time="2025-03-17T20:28:22.507281823Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682438" Mar 17 20:28:22.508668 containerd[1529]: time="2025-03-17T20:28:22.507953686Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:22.512200 containerd[1529]: time="2025-03-17T20:28:22.512139358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:22.514707 containerd[1529]: time="2025-03-17T20:28:22.513913601Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 2.962659998s" Mar 17 20:28:22.514707 containerd[1529]: time="2025-03-17T20:28:22.514000301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 17 20:28:22.515203 containerd[1529]: time="2025-03-17T20:28:22.515173319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 20:28:24.346891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 20:28:24.358026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:24.545020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:24.547959 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 20:28:24.670023 kubelet[2081]: E0317 20:28:24.669250 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:28:24.674841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:28:24.675241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:28:24.676618 systemd[1]: kubelet.service: Consumed 231ms CPU time, 104.4M memory peak. 
Mar 17 20:28:25.054534 containerd[1529]: time="2025-03-17T20:28:25.054392014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:25.056882 containerd[1529]: time="2025-03-17T20:28:25.056827073Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779692" Mar 17 20:28:25.057951 containerd[1529]: time="2025-03-17T20:28:25.057916697Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:25.063799 containerd[1529]: time="2025-03-17T20:28:25.063748673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:25.064872 containerd[1529]: time="2025-03-17T20:28:25.064822136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 2.549403952s" Mar 17 20:28:25.065019 containerd[1529]: time="2025-03-17T20:28:25.064987616Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 17 20:28:25.065880 containerd[1529]: time="2025-03-17T20:28:25.065693701Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 20:28:27.107680 containerd[1529]: time="2025-03-17T20:28:27.107556749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:27.109173 containerd[1529]: time="2025-03-17T20:28:27.109126040Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171427" Mar 17 20:28:27.110545 containerd[1529]: time="2025-03-17T20:28:27.109932100Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:27.114880 containerd[1529]: time="2025-03-17T20:28:27.114843254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:27.117201 containerd[1529]: time="2025-03-17T20:28:27.116990127Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 2.050817372s" Mar 17 20:28:27.117201 containerd[1529]: time="2025-03-17T20:28:27.117034330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 17 20:28:27.118376 
containerd[1529]: time="2025-03-17T20:28:27.118099202Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 20:28:28.991103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88686973.mount: Deactivated successfully. Mar 17 20:28:30.011296 containerd[1529]: time="2025-03-17T20:28:30.011165370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:30.016702 containerd[1529]: time="2025-03-17T20:28:30.016346493Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918193" Mar 17 20:28:30.016702 containerd[1529]: time="2025-03-17T20:28:30.016484764Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:30.019509 containerd[1529]: time="2025-03-17T20:28:30.019463288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:30.020757 containerd[1529]: time="2025-03-17T20:28:30.020719805Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 2.902579715s" Mar 17 20:28:30.021064 containerd[1529]: time="2025-03-17T20:28:30.020878478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 17 20:28:30.022072 containerd[1529]: time="2025-03-17T20:28:30.022041766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 20:28:30.631011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796958825.mount: Deactivated successfully. 
Mar 17 20:28:32.205819 containerd[1529]: time="2025-03-17T20:28:32.205677649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:32.208604 containerd[1529]: time="2025-03-17T20:28:32.208286564Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Mar 17 20:28:32.209503 containerd[1529]: time="2025-03-17T20:28:32.209463912Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:32.214721 containerd[1529]: time="2025-03-17T20:28:32.214684250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:32.216638 containerd[1529]: time="2025-03-17T20:28:32.216585757Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.194397084s" Mar 17 20:28:32.216739 containerd[1529]: time="2025-03-17T20:28:32.216637861Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 17 20:28:32.218306 containerd[1529]: time="2025-03-17T20:28:32.218063317Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 20:28:32.777058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290949309.mount: Deactivated successfully. 
Mar 17 20:28:32.783053 containerd[1529]: time="2025-03-17T20:28:32.782989944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:32.784563 containerd[1529]: time="2025-03-17T20:28:32.784500155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 17 20:28:32.784994 containerd[1529]: time="2025-03-17T20:28:32.784912074Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:32.788392 containerd[1529]: time="2025-03-17T20:28:32.788323252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:32.790230 containerd[1529]: time="2025-03-17T20:28:32.790047520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 571.92706ms" Mar 17 20:28:32.790230 containerd[1529]: time="2025-03-17T20:28:32.790091833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 20:28:32.791327 containerd[1529]: time="2025-03-17T20:28:32.791183865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 20:28:33.445913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106426207.mount: Deactivated successfully. Mar 17 20:28:34.513408 update_engine[1508]: I20250317 20:28:34.513150 1508 update_attempter.cc:509] Updating boot flags... Mar 17 20:28:34.639731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2215) Mar 17 20:28:34.690316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 20:28:34.711204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:34.836895 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2214) Mar 17 20:28:35.037367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2214) Mar 17 20:28:35.295884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:35.309328 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 20:28:35.452315 kubelet[2230]: E0317 20:28:35.451198 2230 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:28:35.454415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:28:35.454765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:28:35.455810 systemd[1]: kubelet.service: Consumed 396ms CPU time, 102.6M memory peak. 
Mar 17 20:28:36.640338 containerd[1529]: time="2025-03-17T20:28:36.639304885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:36.641838 containerd[1529]: time="2025-03-17T20:28:36.641723244Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328" Mar 17 20:28:36.642039 containerd[1529]: time="2025-03-17T20:28:36.641986447Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:36.648359 containerd[1529]: time="2025-03-17T20:28:36.648317120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:36.650915 containerd[1529]: time="2025-03-17T20:28:36.650867382Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.859375959s" Mar 17 20:28:36.651027 containerd[1529]: time="2025-03-17T20:28:36.650966421Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 17 20:28:40.690000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:40.691122 systemd[1]: kubelet.service: Consumed 396ms CPU time, 102.6M memory peak. Mar 17 20:28:40.698048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:40.743519 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-11.scope)... Mar 17 20:28:40.743592 systemd[1]: Reloading... Mar 17 20:28:40.945692 zram_generator::config[2310]: No configuration found. Mar 17 20:28:41.130691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:28:41.284106 systemd[1]: Reloading finished in 539 ms. Mar 17 20:28:41.352869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:41.355829 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 20:28:41.361720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:41.366782 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 20:28:41.367198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:41.367260 systemd[1]: kubelet.service: Consumed 139ms CPU time, 90.8M memory peak. Mar 17 20:28:41.375317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:41.616818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 20:28:41.631309 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 20:28:41.827538 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:28:41.827538 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 20:28:41.827538 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:28:41.828241 kubelet[2385]: I0317 20:28:41.827694 2385 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 20:28:42.151931 kubelet[2385]: I0317 20:28:42.151865 2385 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 20:28:42.151931 kubelet[2385]: I0317 20:28:42.151920 2385 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 20:28:42.152301 kubelet[2385]: I0317 20:28:42.152278 2385 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 20:28:42.191216 kubelet[2385]: E0317 20:28:42.190655 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.57.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:42.192115 kubelet[2385]: I0317 20:28:42.191893 2385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:28:42.217688 kubelet[2385]: E0317 20:28:42.217612 2385 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 20:28:42.217688 kubelet[2385]: I0317 20:28:42.217688 2385 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 20:28:42.226838 kubelet[2385]: I0317 20:28:42.226766 2385 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 20:28:42.231099 kubelet[2385]: I0317 20:28:42.231031 2385 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 20:28:42.231387 kubelet[2385]: I0317 20:28:42.231081 2385 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-24y52.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 20:28:42.233196 kubelet[2385]: I0317 20:28:42.233145 2385 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 20:28:42.233196 kubelet[2385]: I0317 20:28:42.233183 2385 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 20:28:42.233474 kubelet[2385]: I0317 20:28:42.233441 2385 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:28:42.239472 kubelet[2385]: I0317 20:28:42.239423 2385 kubelet.go:446] "Attempting to sync node with API server" Mar 17 20:28:42.239472 kubelet[2385]: I0317 20:28:42.239458 2385 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 20:28:42.239636 kubelet[2385]: I0317 20:28:42.239529 2385 kubelet.go:352] "Adding apiserver pod source" Mar 17 20:28:42.239636 kubelet[2385]: I0317 20:28:42.239557 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 20:28:42.246075 kubelet[2385]: I0317 20:28:42.245970 2385 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 20:28:42.250673 kubelet[2385]: I0317 20:28:42.249503 2385 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:28:42.250673 kubelet[2385]: W0317 20:28:42.250515 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.57.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-24y52.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection refused Mar 17 20:28:42.250823 kubelet[2385]: W0317 20:28:42.250692 2385 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 20:28:42.251023 kubelet[2385]: E0317 20:28:42.250628 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.57.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-24y52.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:42.251222 kubelet[2385]: W0317 20:28:42.251179 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.57.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection refused Mar 17 20:28:42.251385 kubelet[2385]: E0317 20:28:42.251355 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.57.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:42.252005 kubelet[2385]: I0317 20:28:42.251976 2385 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 20:28:42.252107 kubelet[2385]: I0317 20:28:42.252040 2385 server.go:1287] "Started kubelet" Mar 17 20:28:42.253301 kubelet[2385]: I0317 20:28:42.253239 2385 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:28:42.253936 kubelet[2385]: I0317 20:28:42.253852 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:28:42.254485 kubelet[2385]: I0317 20:28:42.254455 2385 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:28:42.259152 kubelet[2385]: I0317 20:28:42.258476 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:28:42.263489 kubelet[2385]: E0317 20:28:42.258974 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.57.126:6443/api/v1/namespaces/default/events\": dial tcp 10.230.57.126:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-24y52.gb1.brightbox.com.182db1121f208edd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-24y52.gb1.brightbox.com,UID:srv-24y52.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-24y52.gb1.brightbox.com,},FirstTimestamp:2025-03-17 20:28:42.252005085 +0000 UTC m=+0.476028998,LastTimestamp:2025-03-17 20:28:42.252005085 +0000 UTC m=+0.476028998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-24y52.gb1.brightbox.com,}" Mar 17 20:28:42.268090 kubelet[2385]: I0317 20:28:42.268051 2385 server.go:490] "Adding debug handlers to kubelet server" Mar 17 20:28:42.273363 kubelet[2385]: I0317 20:28:42.268510 2385 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 20:28:42.273499 kubelet[2385]: I0317 20:28:42.268747 2385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 20:28:42.273749 
kubelet[2385]: I0317 20:28:42.273722 2385 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:28:42.273849 kubelet[2385]: E0317 20:28:42.270286 2385 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-24y52.gb1.brightbox.com\" not found" Mar 17 20:28:42.273849 kubelet[2385]: I0317 20:28:42.270058 2385 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:28:42.274789 kubelet[2385]: W0317 20:28:42.274703 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.57.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection refused Mar 17 20:28:42.275902 kubelet[2385]: E0317 20:28:42.275870 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.57.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:42.275902 kubelet[2385]: I0317 20:28:42.275835 2385 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:28:42.276025 kubelet[2385]: I0317 20:28:42.276001 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:28:42.276456 kubelet[2385]: E0317 20:28:42.276393 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-24y52.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.126:6443: connect: connection refused" interval="200ms" Mar 17 20:28:42.279063 kubelet[2385]: E0317 20:28:42.279038 2385 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 20:28:42.279944 kubelet[2385]: I0317 20:28:42.279108 2385 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:28:42.316344 kubelet[2385]: I0317 20:28:42.316125 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:28:42.318271 kubelet[2385]: I0317 20:28:42.318247 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 20:28:42.318453 kubelet[2385]: I0317 20:28:42.318431 2385 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 20:28:42.320352 kubelet[2385]: I0317 20:28:42.320149 2385 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 20:28:42.320352 kubelet[2385]: I0317 20:28:42.320181 2385 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 20:28:42.320352 kubelet[2385]: E0317 20:28:42.320258 2385 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:28:42.323032 kubelet[2385]: I0317 20:28:42.323003 2385 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 20:28:42.324822 kubelet[2385]: I0317 20:28:42.323754 2385 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 20:28:42.324822 kubelet[2385]: I0317 20:28:42.323798 2385 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:28:42.324822 kubelet[2385]: W0317 20:28:42.324460 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.57.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection refused Mar 17 20:28:42.324822 kubelet[2385]: E0317 20:28:42.324501 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.57.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:42.326768 kubelet[2385]: I0317 20:28:42.326743 2385 policy_none.go:49] "None policy: Start" Mar 17 20:28:42.326932 kubelet[2385]: I0317 20:28:42.326909 2385 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 20:28:42.327065 kubelet[2385]: I0317 20:28:42.327045 2385 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:28:42.336083 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 20:28:42.352517 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 20:28:42.358993 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 20:28:42.371342 kubelet[2385]: I0317 20:28:42.371056 2385 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:28:42.371461 kubelet[2385]: I0317 20:28:42.371369 2385 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 20:28:42.371461 kubelet[2385]: I0317 20:28:42.371408 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:28:42.372172 kubelet[2385]: I0317 20:28:42.372111 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:28:42.374738 kubelet[2385]: E0317 20:28:42.374389 2385 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 20:28:42.374738 kubelet[2385]: E0317 20:28:42.374491 2385 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-24y52.gb1.brightbox.com\" not found" Mar 17 20:28:42.437001 systemd[1]: Created slice kubepods-burstable-pod4ad671ed4e2a022157c874cd6c193632.slice - libcontainer container kubepods-burstable-pod4ad671ed4e2a022157c874cd6c193632.slice. 
Mar 17 20:28:42.453147 kubelet[2385]: E0317 20:28:42.453072 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.456499 systemd[1]: Created slice kubepods-burstable-pod2765503312ac0f3bdf248914be766e34.slice - libcontainer container kubepods-burstable-pod2765503312ac0f3bdf248914be766e34.slice. Mar 17 20:28:42.460174 kubelet[2385]: E0317 20:28:42.460143 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.463984 systemd[1]: Created slice kubepods-burstable-podeb1708bb5e223b755fc8d1dc1680ca02.slice - libcontainer container kubepods-burstable-podeb1708bb5e223b755fc8d1dc1680ca02.slice. Mar 17 20:28:42.466857 kubelet[2385]: E0317 20:28:42.466829 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.474782 kubelet[2385]: I0317 20:28:42.474237 2385 kubelet_node_status.go:76] "Attempting to register node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.474782 kubelet[2385]: I0317 20:28:42.474262 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2765503312ac0f3bdf248914be766e34-kubeconfig\") pod \"kube-scheduler-srv-24y52.gb1.brightbox.com\" (UID: \"2765503312ac0f3bdf248914be766e34\") " pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.474782 kubelet[2385]: I0317 20:28:42.474318 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-ca-certs\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.474782 kubelet[2385]: I0317 20:28:42.474357 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-flexvolume-dir\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.474782 kubelet[2385]: I0317 20:28:42.474385 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-k8s-certs\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.475105 kubelet[2385]: I0317 20:28:42.474417 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.475105 kubelet[2385]: I0317 
20:28:42.474486 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-kubeconfig\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.475105 kubelet[2385]: I0317 20:28:42.474516 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb1708bb5e223b755fc8d1dc1680ca02-ca-certs\") pod \"kube-apiserver-srv-24y52.gb1.brightbox.com\" (UID: \"eb1708bb5e223b755fc8d1dc1680ca02\") " pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.475105 kubelet[2385]: I0317 20:28:42.474545 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb1708bb5e223b755fc8d1dc1680ca02-k8s-certs\") pod \"kube-apiserver-srv-24y52.gb1.brightbox.com\" (UID: \"eb1708bb5e223b755fc8d1dc1680ca02\") " pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.475105 kubelet[2385]: I0317 20:28:42.474577 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb1708bb5e223b755fc8d1dc1680ca02-usr-share-ca-certificates\") pod \"kube-apiserver-srv-24y52.gb1.brightbox.com\" (UID: \"eb1708bb5e223b755fc8d1dc1680ca02\") " pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.475383 kubelet[2385]: E0317 20:28:42.474734 2385 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.57.126:6443/api/v1/nodes\": dial tcp 10.230.57.126:6443: connect: connection refused" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.478058 kubelet[2385]: E0317 20:28:42.478006 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-24y52.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.126:6443: connect: connection refused" interval="400ms" Mar 17 20:28:42.678129 kubelet[2385]: I0317 20:28:42.678079 2385 kubelet_node_status.go:76] "Attempting to register node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.678582 kubelet[2385]: E0317 20:28:42.678540 2385 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.57.126:6443/api/v1/nodes\": dial tcp 10.230.57.126:6443: connect: connection refused" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:42.755852 containerd[1529]: time="2025-03-17T20:28:42.755729569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-24y52.gb1.brightbox.com,Uid:4ad671ed4e2a022157c874cd6c193632,Namespace:kube-system,Attempt:0,}" Mar 17 20:28:42.765448 containerd[1529]: time="2025-03-17T20:28:42.765400346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-24y52.gb1.brightbox.com,Uid:2765503312ac0f3bdf248914be766e34,Namespace:kube-system,Attempt:0,}" Mar 17 20:28:42.768533 containerd[1529]: time="2025-03-17T20:28:42.768230797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-24y52.gb1.brightbox.com,Uid:eb1708bb5e223b755fc8d1dc1680ca02,Namespace:kube-system,Attempt:0,}" Mar 17 20:28:42.879350 kubelet[2385]: E0317 
20:28:42.879282 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-24y52.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.126:6443: connect: connection refused" interval="800ms" Mar 17 20:28:43.081792 kubelet[2385]: I0317 20:28:43.081600 2385 kubelet_node_status.go:76] "Attempting to register node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:43.082623 kubelet[2385]: E0317 20:28:43.082579 2385 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.57.126:6443/api/v1/nodes\": dial tcp 10.230.57.126:6443: connect: connection refused" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:43.174625 kubelet[2385]: W0317 20:28:43.174514 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.57.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection refused Mar 17 20:28:43.174625 kubelet[2385]: E0317 20:28:43.174631 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.57.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:43.198293 kubelet[2385]: W0317 20:28:43.198220 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.57.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection refused Mar 17 20:28:43.198402 kubelet[2385]: E0317 20:28:43.198304 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.57.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:43.359502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500389175.mount: Deactivated successfully. 
Mar 17 20:28:43.366462 containerd[1529]: time="2025-03-17T20:28:43.365172010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 20:28:43.367763 containerd[1529]: time="2025-03-17T20:28:43.367684334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 17 20:28:43.369870 containerd[1529]: time="2025-03-17T20:28:43.369821060Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 20:28:43.371834 containerd[1529]: time="2025-03-17T20:28:43.371791554Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 20:28:43.373569 containerd[1529]: time="2025-03-17T20:28:43.373521242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 20:28:43.383379 containerd[1529]: time="2025-03-17T20:28:43.383329830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 20:28:43.386139 containerd[1529]: time="2025-03-17T20:28:43.386069681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 20:28:43.387304 containerd[1529]: time="2025-03-17T20:28:43.387206742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 20:28:43.390705 containerd[1529]: time="2025-03-17T20:28:43.389900725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.975234ms" Mar 17 20:28:43.394255 containerd[1529]: time="2025-03-17T20:28:43.393472287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.147694ms" Mar 17 20:28:43.394951 containerd[1529]: time="2025-03-17T20:28:43.394884818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.216945ms" Mar 17 20:28:43.396589 kubelet[2385]: W0317 20:28:43.396508 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.57.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-24y52.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection 
refused Mar 17 20:28:43.396857 kubelet[2385]: E0317 20:28:43.396826 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.57.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-24y52.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:43.577515 containerd[1529]: time="2025-03-17T20:28:43.577247522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:28:43.577805 containerd[1529]: time="2025-03-17T20:28:43.577148102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:28:43.577805 containerd[1529]: time="2025-03-17T20:28:43.577595781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:28:43.577805 containerd[1529]: time="2025-03-17T20:28:43.577770043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:43.578237 containerd[1529]: time="2025-03-17T20:28:43.578145017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:28:43.578422 containerd[1529]: time="2025-03-17T20:28:43.578100838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:43.578824 containerd[1529]: time="2025-03-17T20:28:43.578727790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:43.579901 containerd[1529]: time="2025-03-17T20:28:43.579777163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:43.583584 containerd[1529]: time="2025-03-17T20:28:43.583492443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:28:43.586153 containerd[1529]: time="2025-03-17T20:28:43.585726611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:28:43.586153 containerd[1529]: time="2025-03-17T20:28:43.585757731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:43.586153 containerd[1529]: time="2025-03-17T20:28:43.585887010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:43.604631 kubelet[2385]: W0317 20:28:43.603615 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.57.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.57.126:6443: connect: connection refused Mar 17 20:28:43.604631 kubelet[2385]: E0317 20:28:43.603744 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.57.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:43.696184 kubelet[2385]: E0317 20:28:43.695975 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-24y52.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.126:6443: connect: connection refused" interval="1.6s" Mar 17 20:28:43.718740 systemd[1]: Started cri-containerd-a5bdfe1a8584805ebe3901ab5fe727df07415010ee5a4e40f09fb1867b0766ef.scope - libcontainer container a5bdfe1a8584805ebe3901ab5fe727df07415010ee5a4e40f09fb1867b0766ef. Mar 17 20:28:43.744884 systemd[1]: Started cri-containerd-1a2beca030910dcd3909e9ae816b0e70e2a508c465d8259b73ebb9692fa55c19.scope - libcontainer container 1a2beca030910dcd3909e9ae816b0e70e2a508c465d8259b73ebb9692fa55c19. Mar 17 20:28:43.748100 systemd[1]: Started cri-containerd-7a51db4f37047c2678fe5bd838517ea77453ade28e1132fb56bf3f7a6737eec4.scope - libcontainer container 7a51db4f37047c2678fe5bd838517ea77453ade28e1132fb56bf3f7a6737eec4. 
Mar 17 20:28:43.859965 containerd[1529]: time="2025-03-17T20:28:43.859766634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-24y52.gb1.brightbox.com,Uid:2765503312ac0f3bdf248914be766e34,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a51db4f37047c2678fe5bd838517ea77453ade28e1132fb56bf3f7a6737eec4\"" Mar 17 20:28:43.868286 containerd[1529]: time="2025-03-17T20:28:43.868058752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-24y52.gb1.brightbox.com,Uid:eb1708bb5e223b755fc8d1dc1680ca02,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a2beca030910dcd3909e9ae816b0e70e2a508c465d8259b73ebb9692fa55c19\"" Mar 17 20:28:43.870153 containerd[1529]: time="2025-03-17T20:28:43.870091865Z" level=info msg="CreateContainer within sandbox \"7a51db4f37047c2678fe5bd838517ea77453ade28e1132fb56bf3f7a6737eec4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 20:28:43.875622 containerd[1529]: time="2025-03-17T20:28:43.875540524Z" level=info msg="CreateContainer within sandbox \"1a2beca030910dcd3909e9ae816b0e70e2a508c465d8259b73ebb9692fa55c19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 20:28:43.881153 containerd[1529]: time="2025-03-17T20:28:43.880896658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-24y52.gb1.brightbox.com,Uid:4ad671ed4e2a022157c874cd6c193632,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5bdfe1a8584805ebe3901ab5fe727df07415010ee5a4e40f09fb1867b0766ef\"" Mar 17 20:28:43.884857 kubelet[2385]: I0317 20:28:43.884827 2385 kubelet_node_status.go:76] "Attempting to register node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:43.886170 containerd[1529]: time="2025-03-17T20:28:43.885898764Z" level=info msg="CreateContainer within sandbox \"a5bdfe1a8584805ebe3901ab5fe727df07415010ee5a4e40f09fb1867b0766ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 20:28:43.886625 kubelet[2385]: E0317 20:28:43.886583 2385 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.57.126:6443/api/v1/nodes\": dial tcp 10.230.57.126:6443: connect: connection refused" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:43.899410 containerd[1529]: time="2025-03-17T20:28:43.899370710Z" level=info msg="CreateContainer within sandbox \"7a51db4f37047c2678fe5bd838517ea77453ade28e1132fb56bf3f7a6737eec4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c25ca26bc1f905f62d31d622451ac9cf4aec3b77e3a6a61cc599f51cf05dda20\"" Mar 17 20:28:43.900108 containerd[1529]: time="2025-03-17T20:28:43.900075952Z" level=info msg="StartContainer for \"c25ca26bc1f905f62d31d622451ac9cf4aec3b77e3a6a61cc599f51cf05dda20\"" Mar 17 20:28:43.906538 containerd[1529]: time="2025-03-17T20:28:43.906210516Z" level=info msg="CreateContainer within sandbox \"1a2beca030910dcd3909e9ae816b0e70e2a508c465d8259b73ebb9692fa55c19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"350ffeba6125b6184a7da8b9cd1399ab1a907d04fa3b7e3a8cd14df586610f98\"" Mar 17 20:28:43.906803 containerd[1529]: time="2025-03-17T20:28:43.906748993Z" level=info msg="StartContainer for \"350ffeba6125b6184a7da8b9cd1399ab1a907d04fa3b7e3a8cd14df586610f98\"" Mar 17 20:28:43.917323 containerd[1529]: time="2025-03-17T20:28:43.917119679Z" level=info msg="CreateContainer within sandbox \"a5bdfe1a8584805ebe3901ab5fe727df07415010ee5a4e40f09fb1867b0766ef\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d57f6af7b74a70308c387f6408345b053a4a86146df0f9d6aa70dd7153c77c3f\"" Mar 17 20:28:43.919388 containerd[1529]: time="2025-03-17T20:28:43.917953494Z" level=info msg="StartContainer for \"d57f6af7b74a70308c387f6408345b053a4a86146df0f9d6aa70dd7153c77c3f\"" Mar 17 20:28:43.958373 systemd[1]: Started cri-containerd-350ffeba6125b6184a7da8b9cd1399ab1a907d04fa3b7e3a8cd14df586610f98.scope - libcontainer container 350ffeba6125b6184a7da8b9cd1399ab1a907d04fa3b7e3a8cd14df586610f98. Mar 17 20:28:43.972854 systemd[1]: Started cri-containerd-c25ca26bc1f905f62d31d622451ac9cf4aec3b77e3a6a61cc599f51cf05dda20.scope - libcontainer container c25ca26bc1f905f62d31d622451ac9cf4aec3b77e3a6a61cc599f51cf05dda20. Mar 17 20:28:43.995852 systemd[1]: Started cri-containerd-d57f6af7b74a70308c387f6408345b053a4a86146df0f9d6aa70dd7153c77c3f.scope - libcontainer container d57f6af7b74a70308c387f6408345b053a4a86146df0f9d6aa70dd7153c77c3f. Mar 17 20:28:44.063142 containerd[1529]: time="2025-03-17T20:28:44.062914282Z" level=info msg="StartContainer for \"350ffeba6125b6184a7da8b9cd1399ab1a907d04fa3b7e3a8cd14df586610f98\" returns successfully" Mar 17 20:28:44.096004 containerd[1529]: time="2025-03-17T20:28:44.095953131Z" level=info msg="StartContainer for \"d57f6af7b74a70308c387f6408345b053a4a86146df0f9d6aa70dd7153c77c3f\" returns successfully" Mar 17 20:28:44.111722 containerd[1529]: time="2025-03-17T20:28:44.111659854Z" level=info msg="StartContainer for \"c25ca26bc1f905f62d31d622451ac9cf4aec3b77e3a6a61cc599f51cf05dda20\" returns successfully" Mar 17 20:28:44.342128 kubelet[2385]: E0317 20:28:44.341009 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:44.375464 kubelet[2385]: E0317 20:28:44.368346 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:44.375464 kubelet[2385]: E0317 20:28:44.368813 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.57.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.57.126:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:28:44.375464 kubelet[2385]: E0317 20:28:44.374754 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:45.376469 kubelet[2385]: E0317 20:28:45.376430 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:45.377139 kubelet[2385]: E0317 20:28:45.377050 2385 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:45.491150 kubelet[2385]: I0317 20:28:45.491110 2385 kubelet_node_status.go:76] "Attempting to register node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:46.383672 kubelet[2385]: E0317 20:28:46.382155 2385 kubelet.go:3196] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.085084 kubelet[2385]: E0317 20:28:47.085007 2385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-24y52.gb1.brightbox.com\" not found" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.172334 kubelet[2385]: I0317 20:28:47.172270 2385 kubelet_node_status.go:79] "Successfully registered node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.172334 kubelet[2385]: E0317 20:28:47.172333 2385 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"srv-24y52.gb1.brightbox.com\": node \"srv-24y52.gb1.brightbox.com\" not found" Mar 17 20:28:47.246693 kubelet[2385]: I0317 20:28:47.245335 2385 apiserver.go:52] "Watching apiserver" Mar 17 20:28:47.271755 kubelet[2385]: I0317 20:28:47.271695 2385 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.274417 kubelet[2385]: I0317 20:28:47.274339 2385 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 20:28:47.278822 kubelet[2385]: E0317 20:28:47.278754 2385 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.278822 kubelet[2385]: I0317 20:28:47.278786 2385 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.280860 kubelet[2385]: E0317 20:28:47.280824 2385 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-24y52.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.280981 kubelet[2385]: I0317 20:28:47.280860 2385 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.283167 kubelet[2385]: E0317 20:28:47.282977 2385 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-24y52.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.546695 kubelet[2385]: I0317 20:28:47.546355 2385 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:47.549699 kubelet[2385]: E0317 20:28:47.549666 2385 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:49.240701 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-11.scope)... Mar 17 20:28:49.241299 systemd[1]: Reloading... Mar 17 20:28:49.409730 zram_generator::config[2708]: No configuration found. Mar 17 20:28:49.640978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 20:28:49.823465 systemd[1]: Reloading finished in 581 ms. Mar 17 20:28:49.862245 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:49.876515 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 20:28:49.876984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:49.877085 systemd[1]: kubelet.service: Consumed 1.057s CPU time, 122.4M memory peak. Mar 17 20:28:49.885089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 20:28:50.093040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 20:28:50.104311 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 20:28:50.232925 kubelet[2772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:28:50.232925 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 20:28:50.232925 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:28:50.233438 kubelet[2772]: I0317 20:28:50.233012 2772 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 20:28:50.243175 kubelet[2772]: I0317 20:28:50.243122 2772 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 20:28:50.243175 kubelet[2772]: I0317 20:28:50.243157 2772 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 20:28:50.243482 kubelet[2772]: I0317 20:28:50.243439 2772 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 20:28:50.248759 kubelet[2772]: I0317 20:28:50.248567 2772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 20:28:50.252955 kubelet[2772]: I0317 20:28:50.252928 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:28:50.266463 kubelet[2772]: E0317 20:28:50.266238 2772 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 20:28:50.266463 kubelet[2772]: I0317 20:28:50.266279 2772 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 20:28:50.267489 sudo[2785]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 20:28:50.268754 sudo[2785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 20:28:50.271889 kubelet[2772]: I0317 20:28:50.271712 2772 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 20:28:50.272512 kubelet[2772]: I0317 20:28:50.272275 2772 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 20:28:50.272512 kubelet[2772]: I0317 20:28:50.272326 2772 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-24y52.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 20:28:50.272770 kubelet[2772]: I0317 20:28:50.272532 2772 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 20:28:50.272770 kubelet[2772]: I0317 20:28:50.272548 2772 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 20:28:50.272770 kubelet[2772]: I0317 20:28:50.272607 2772 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:28:50.272914 kubelet[2772]: I0317 20:28:50.272868 2772 kubelet.go:446] "Attempting to sync node with API server" Mar 17 20:28:50.272914 kubelet[2772]: I0317 20:28:50.272890 2772 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 20:28:50.276925 kubelet[2772]: I0317 20:28:50.275020 2772 kubelet.go:352] "Adding apiserver pod source" Mar 17 20:28:50.276925 kubelet[2772]: I0317 20:28:50.276917 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 20:28:50.281459 kubelet[2772]: I0317 20:28:50.280162 2772 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 20:28:50.281459 kubelet[2772]: I0317 20:28:50.280688 2772 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:28:50.281459 kubelet[2772]: I0317 20:28:50.281341 2772 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 20:28:50.281459 kubelet[2772]: I0317 20:28:50.281400 2772 server.go:1287] "Started kubelet" Mar 17 20:28:50.299665 kubelet[2772]: I0317 20:28:50.297758 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:28:50.302861 kubelet[2772]: I0317 20:28:50.302830 2772 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 20:28:50.302970 kubelet[2772]: I0317 20:28:50.302932 2772 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:28:50.305451 kubelet[2772]: I0317 20:28:50.304273 2772 server.go:490] "Adding debug handlers to kubelet server" Mar 17 20:28:50.306721 kubelet[2772]: I0317 20:28:50.306572 2772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:28:50.307323 kubelet[2772]: I0317 20:28:50.307123 2772 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:28:50.318090 kubelet[2772]: I0317 20:28:50.317935 2772 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 20:28:50.319943 kubelet[2772]: E0317 20:28:50.319908 2772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-24y52.gb1.brightbox.com\" not found" Mar 17 20:28:50.322339 kubelet[2772]: I0317 20:28:50.321229 2772 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:28:50.327105 kubelet[2772]: I0317 20:28:50.326189 2772 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:28:50.329612 kubelet[2772]: I0317 20:28:50.329478 2772 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:28:50.337211 kubelet[2772]: E0317 20:28:50.337182 2772 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 20:28:50.345772 kubelet[2772]: I0317 20:28:50.345670 2772 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:28:50.345772 kubelet[2772]: I0317 20:28:50.345692 2772 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:28:50.369825 kubelet[2772]: I0317 20:28:50.369732 2772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:28:50.373364 kubelet[2772]: I0317 20:28:50.373320 2772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 20:28:50.373364 kubelet[2772]: I0317 20:28:50.373357 2772 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 20:28:50.373839 kubelet[2772]: I0317 20:28:50.373385 2772 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
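The nodeConfig blob logged a few entries above carries the kubelet's hard-eviction thresholds as JSON. A small sketch, assuming only the field names visible in that log line (not the kubelet's real Go types), that decodes two of those thresholds:

package main

import (
	"encoding/json"
	"fmt"
)

// threshold mirrors one element of "HardEvictionThresholds" as printed in the log.
type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	// Trimmed excerpt of the logged nodeConfig JSON.
	blob := `[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]`
	var ts []threshold
	if err := json.Unmarshal([]byte(blob), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		q := "none"
		if t.Value.Quantity != nil {
			q = *t.Value.Quantity
		}
		fmt.Printf("%s %s quantity=%s percentage=%g\n", t.Signal, t.Operator, q, t.Value.Percentage)
	}
}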
Mar 17 20:28:50.373839 kubelet[2772]: I0317 20:28:50.373397 2772 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 20:28:50.373839 kubelet[2772]: E0317 20:28:50.373490 2772 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:28:50.475267 kubelet[2772]: I0317 20:28:50.474541 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 20:28:50.475267 kubelet[2772]: I0317 20:28:50.474570 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 20:28:50.475267 kubelet[2772]: I0317 20:28:50.474597 2772 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:28:50.475267 kubelet[2772]: E0317 20:28:50.474842 2772 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 20:28:50.475939 kubelet[2772]: I0317 20:28:50.475907 2772 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 20:28:50.476042 kubelet[2772]: I0317 20:28:50.475936 2772 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 20:28:50.476042 kubelet[2772]: I0317 20:28:50.475967 2772 policy_none.go:49] "None policy: Start" Mar 17 20:28:50.476042 kubelet[2772]: I0317 20:28:50.475981 2772 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 20:28:50.476042 kubelet[2772]: I0317 20:28:50.476000 2772 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:28:50.476974 kubelet[2772]: I0317 20:28:50.476188 2772 state_mem.go:75] "Updated machine memory state" Mar 17 20:28:50.490296 kubelet[2772]: I0317 20:28:50.490133 2772 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:28:50.492666 kubelet[2772]: I0317 20:28:50.491727 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 20:28:50.492666 kubelet[2772]: I0317 20:28:50.491754 2772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:28:50.498333 kubelet[2772]: I0317 20:28:50.496887 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:28:50.498999 kubelet[2772]: E0317 20:28:50.498972 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 20:28:50.623612 kubelet[2772]: I0317 20:28:50.622992 2772 kubelet_node_status.go:76] "Attempting to register node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.635946 kubelet[2772]: I0317 20:28:50.635904 2772 kubelet_node_status.go:125] "Node was previously registered" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.636067 kubelet[2772]: I0317 20:28:50.636008 2772 kubelet_node_status.go:79] "Successfully registered node" node="srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.678123 kubelet[2772]: I0317 20:28:50.676570 2772 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.681996 kubelet[2772]: I0317 20:28:50.681770 2772 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.683872 kubelet[2772]: I0317 20:28:50.682915 2772 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.687676 kubelet[2772]: W0317 20:28:50.687636 2772 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:28:50.692685 kubelet[2772]: W0317 20:28:50.691742 2772 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:28:50.694539 kubelet[2772]: W0317 20:28:50.694219 2772 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:28:50.727907 kubelet[2772]: I0317 20:28:50.727700 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.727907 kubelet[2772]: I0317 20:28:50.727777 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2765503312ac0f3bdf248914be766e34-kubeconfig\") pod \"kube-scheduler-srv-24y52.gb1.brightbox.com\" (UID: \"2765503312ac0f3bdf248914be766e34\") " pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.728970 kubelet[2772]: I0317 20:28:50.728526 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb1708bb5e223b755fc8d1dc1680ca02-ca-certs\") pod \"kube-apiserver-srv-24y52.gb1.brightbox.com\" (UID: \"eb1708bb5e223b755fc8d1dc1680ca02\") " pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.728970 kubelet[2772]: I0317 20:28:50.728595 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb1708bb5e223b755fc8d1dc1680ca02-k8s-certs\") pod \"kube-apiserver-srv-24y52.gb1.brightbox.com\" (UID: \"eb1708bb5e223b755fc8d1dc1680ca02\") " pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.728970 kubelet[2772]: I0317 20:28:50.728634 2772 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-flexvolume-dir\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.728970 kubelet[2772]: I0317 20:28:50.728693 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-k8s-certs\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.728970 kubelet[2772]: I0317 20:28:50.728742 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-kubeconfig\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.729434 kubelet[2772]: I0317 20:28:50.728774 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb1708bb5e223b755fc8d1dc1680ca02-usr-share-ca-certificates\") pod \"kube-apiserver-srv-24y52.gb1.brightbox.com\" (UID: \"eb1708bb5e223b755fc8d1dc1680ca02\") " pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:50.729434 kubelet[2772]: I0317 20:28:50.728802 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ad671ed4e2a022157c874cd6c193632-ca-certs\") pod \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" (UID: \"4ad671ed4e2a022157c874cd6c193632\") " pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:51.175253 sudo[2785]: pam_unix(sudo:session): session closed for user root Mar 17 20:28:51.280660 kubelet[2772]: I0317 20:28:51.280568 2772 apiserver.go:52] "Watching apiserver" Mar 17 20:28:51.326469 kubelet[2772]: I0317 20:28:51.326376 2772 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 20:28:51.413736 kubelet[2772]: I0317 20:28:51.412896 2772 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:51.413736 kubelet[2772]: I0317 20:28:51.413538 2772 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" Mar 17 20:28:51.418656 kubelet[2772]: I0317 20:28:51.417429 2772 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:51.431133 kubelet[2772]: W0317 20:28:51.429493 2772 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:28:51.431133 kubelet[2772]: E0317 20:28:51.429556 2772 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-24y52.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" Mar 17 20:28:51.431133 
kubelet[2772]: W0317 20:28:51.429850 2772 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:28:51.431133 kubelet[2772]: E0317 20:28:51.429894 2772 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-24y52.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" Mar 17 20:28:51.439661 kubelet[2772]: W0317 20:28:51.438671 2772 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:28:51.439661 kubelet[2772]: E0317 20:28:51.438732 2772 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-24y52.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" Mar 17 20:28:51.478160 kubelet[2772]: I0317 20:28:51.477864 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-24y52.gb1.brightbox.com" podStartSLOduration=1.477801372 podStartE2EDuration="1.477801372s" podCreationTimestamp="2025-03-17 20:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:28:51.476188954 +0000 UTC m=+1.316455441" watchObservedRunningTime="2025-03-17 20:28:51.477801372 +0000 UTC m=+1.318067846" Mar 17 20:28:51.501184 kubelet[2772]: I0317 20:28:51.500255 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-24y52.gb1.brightbox.com" podStartSLOduration=1.5002383080000001 podStartE2EDuration="1.500238308s" podCreationTimestamp="2025-03-17 20:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:28:51.498290552 +0000 UTC m=+1.338557048" watchObservedRunningTime="2025-03-17 20:28:51.500238308 +0000 UTC m=+1.340504799" Mar 17 20:28:51.501184 kubelet[2772]: I0317 20:28:51.500367 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-24y52.gb1.brightbox.com" podStartSLOduration=1.50036011 podStartE2EDuration="1.50036011s" podCreationTimestamp="2025-03-17 20:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:28:51.489113187 +0000 UTC m=+1.329379673" watchObservedRunningTime="2025-03-17 20:28:51.50036011 +0000 UTC m=+1.340626589" Mar 17 20:28:53.103679 sudo[1802]: pam_unix(sudo:session): session closed for user root Mar 17 20:28:53.246973 sshd[1801]: Connection closed by 139.178.89.65 port 56132 Mar 17 20:28:53.250950 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Mar 17 20:28:53.256194 systemd[1]: sshd@8-10.230.57.126:22-139.178.89.65:56132.service: Deactivated successfully. Mar 17 20:28:53.259794 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 20:28:53.260178 systemd[1]: session-11.scope: Consumed 7.125s CPU time, 210M memory peak. Mar 17 20:28:53.263510 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. Mar 17 20:28:53.266066 systemd-logind[1507]: Removed session 11. 
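After the restart the node registers cleanly ("Node was previously registered") and the three static control-plane pods get mirror pods, with their startup latencies recorded above. A hedged client-go sketch for confirming the same state from outside the kubelet; the kubeconfig path is an assumption, while the node name comes from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	node := "srv-24y52.gb1.brightbox.com" // node name from the log
	if _, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{}); err != nil {
		panic(err)
	}
	// The mirror pods for the static control-plane pods are bound to this node.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + node})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
}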
Mar 17 20:28:54.128349 kubelet[2772]: I0317 20:28:54.127944 2772 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 20:28:54.129983 containerd[1529]: time="2025-03-17T20:28:54.129471927Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 20:28:54.132020 kubelet[2772]: I0317 20:28:54.130834 2772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 20:28:54.966912 kubelet[2772]: I0317 20:28:54.966818 2772 status_manager.go:890] "Failed to get status for pod" podUID="83a3df04-39d8-4bc9-ad90-c50f9cb68393" pod="kube-system/kube-proxy-chxf8" err="pods \"kube-proxy-chxf8\" is forbidden: User \"system:node:srv-24y52.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-24y52.gb1.brightbox.com' and this object" Mar 17 20:28:54.966912 kubelet[2772]: W0317 20:28:54.966859 2772 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-24y52.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-24y52.gb1.brightbox.com' and this object Mar 17 20:28:54.967218 kubelet[2772]: E0317 20:28:54.966932 2772 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-24y52.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-24y52.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 17 20:28:54.971270 kubelet[2772]: W0317 20:28:54.971223 2772 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-24y52.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-24y52.gb1.brightbox.com' and this object Mar 17 20:28:54.971394 kubelet[2772]: E0317 20:28:54.971264 2772 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:srv-24y52.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-24y52.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 17 20:28:54.972528 systemd[1]: Created slice kubepods-besteffort-pod83a3df04_39d8_4bc9_ad90_c50f9cb68393.slice - libcontainer container kubepods-besteffort-pod83a3df04_39d8_4bc9_ad90_c50f9cb68393.slice. Mar 17 20:28:55.006966 systemd[1]: Created slice kubepods-burstable-pod451330fa_c9d5_43aa_a54d_add34474be19.slice - libcontainer container kubepods-burstable-pod451330fa_c9d5_43aa_a54d_add34474be19.slice. 
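A few entries above, the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the container runtime. A quick, self-contained check of how many addresses that range spans:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Pod CIDR pushed to the container runtime in the log above.
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	fmt.Printf("pod CIDR %s spans %d addresses\n", cidr, 1<<uint(bits-ones))
	// 2^(32-24) = 256; the usable pod count is a little lower once the network,
	// gateway, and broadcast addresses are set aside.
}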
Mar 17 20:28:55.058033 kubelet[2772]: I0317 20:28:55.057964 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-kernel\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.058033 kubelet[2772]: I0317 20:28:55.058037 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-cgroup\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.058321 kubelet[2772]: I0317 20:28:55.058068 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/451330fa-c9d5-43aa-a54d-add34474be19-clustermesh-secrets\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.058321 kubelet[2772]: I0317 20:28:55.058106 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lg8q\" (UniqueName: \"kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-kube-api-access-9lg8q\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.058321 kubelet[2772]: I0317 20:28:55.058160 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83a3df04-39d8-4bc9-ad90-c50f9cb68393-xtables-lock\") pod \"kube-proxy-chxf8\" (UID: \"83a3df04-39d8-4bc9-ad90-c50f9cb68393\") " pod="kube-system/kube-proxy-chxf8" Mar 17 20:28:55.058321 kubelet[2772]: I0317 20:28:55.058193 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-hostproc\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.058321 kubelet[2772]: I0317 20:28:55.058220 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-etc-cni-netd\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059122 kubelet[2772]: I0317 20:28:55.058249 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/451330fa-c9d5-43aa-a54d-add34474be19-cilium-config-path\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059122 kubelet[2772]: I0317 20:28:55.058277 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-run\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059122 kubelet[2772]: I0317 20:28:55.058307 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-xtables-lock\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059122 kubelet[2772]: I0317 20:28:55.058341 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqz95\" (UniqueName: \"kubernetes.io/projected/83a3df04-39d8-4bc9-ad90-c50f9cb68393-kube-api-access-dqz95\") pod \"kube-proxy-chxf8\" (UID: \"83a3df04-39d8-4bc9-ad90-c50f9cb68393\") " pod="kube-system/kube-proxy-chxf8" Mar 17 20:28:55.059122 kubelet[2772]: I0317 20:28:55.058373 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cni-path\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059122 kubelet[2772]: I0317 20:28:55.058403 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83a3df04-39d8-4bc9-ad90-c50f9cb68393-kube-proxy\") pod \"kube-proxy-chxf8\" (UID: \"83a3df04-39d8-4bc9-ad90-c50f9cb68393\") " pod="kube-system/kube-proxy-chxf8" Mar 17 20:28:55.059941 kubelet[2772]: I0317 20:28:55.058431 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83a3df04-39d8-4bc9-ad90-c50f9cb68393-lib-modules\") pod \"kube-proxy-chxf8\" (UID: \"83a3df04-39d8-4bc9-ad90-c50f9cb68393\") " pod="kube-system/kube-proxy-chxf8" Mar 17 20:28:55.059941 kubelet[2772]: I0317 20:28:55.058457 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-lib-modules\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059941 kubelet[2772]: I0317 20:28:55.058487 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-hubble-tls\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059941 kubelet[2772]: I0317 20:28:55.058514 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-bpf-maps\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.059941 kubelet[2772]: I0317 20:28:55.058552 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-net\") pod \"cilium-66r6x\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " pod="kube-system/cilium-66r6x" Mar 17 20:28:55.287160 systemd[1]: Created slice kubepods-besteffort-pod6b16d145_6f58_4f1d_ac6c_ea3969459599.slice - libcontainer container kubepods-besteffort-pod6b16d145_6f58_4f1d_ac6c_ea3969459599.slice. 
Mar 17 20:28:55.361671 kubelet[2772]: I0317 20:28:55.361575 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rgnj\" (UniqueName: \"kubernetes.io/projected/6b16d145-6f58-4f1d-ac6c-ea3969459599-kube-api-access-2rgnj\") pod \"cilium-operator-6c4d7847fc-dfxp5\" (UID: \"6b16d145-6f58-4f1d-ac6c-ea3969459599\") " pod="kube-system/cilium-operator-6c4d7847fc-dfxp5" Mar 17 20:28:55.361671 kubelet[2772]: I0317 20:28:55.361676 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b16d145-6f58-4f1d-ac6c-ea3969459599-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dfxp5\" (UID: \"6b16d145-6f58-4f1d-ac6c-ea3969459599\") " pod="kube-system/cilium-operator-6c4d7847fc-dfxp5" Mar 17 20:28:55.895260 containerd[1529]: time="2025-03-17T20:28:55.895101250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dfxp5,Uid:6b16d145-6f58-4f1d-ac6c-ea3969459599,Namespace:kube-system,Attempt:0,}" Mar 17 20:28:55.918478 containerd[1529]: time="2025-03-17T20:28:55.917809260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-66r6x,Uid:451330fa-c9d5-43aa-a54d-add34474be19,Namespace:kube-system,Attempt:0,}" Mar 17 20:28:55.931395 containerd[1529]: time="2025-03-17T20:28:55.930894136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:28:55.931395 containerd[1529]: time="2025-03-17T20:28:55.931026885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:28:55.931395 containerd[1529]: time="2025-03-17T20:28:55.931052519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:55.933795 containerd[1529]: time="2025-03-17T20:28:55.933621804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:55.962882 systemd[1]: Started cri-containerd-acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5.scope - libcontainer container acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5. Mar 17 20:28:55.975422 containerd[1529]: time="2025-03-17T20:28:55.975271712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:28:55.979360 containerd[1529]: time="2025-03-17T20:28:55.977989983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:28:55.979360 containerd[1529]: time="2025-03-17T20:28:55.978027105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:55.979360 containerd[1529]: time="2025-03-17T20:28:55.978236403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:56.028808 systemd[1]: Started cri-containerd-b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd.scope - libcontainer container b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd. 
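The volume reconciler entries above enumerate the host paths the cilium-66r6x pod mounts (bpf-maps, cilium-run, cni-path, lib-modules, xtables-lock, and so on). A sketch of how a few of them would be declared with the Kubernetes API types; only the volume names come from the log, and the host-side paths shown are typical Cilium defaults, i.e. assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a HostPath volume declaration like those mounted above.
func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	// Volume names from the log; paths are assumed Cilium defaults.
	volumes := []corev1.Volume{
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),
		hostPathVolume("cilium-run", "/var/run/cilium"),
		hostPathVolume("cni-path", "/opt/cni/bin"),
		hostPathVolume("lib-modules", "/lib/modules"),
	}
	for _, v := range volumes {
		fmt.Println(v.Name, "->", v.HostPath.Path)
	}
}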
Mar 17 20:28:56.079689 containerd[1529]: time="2025-03-17T20:28:56.078915105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dfxp5,Uid:6b16d145-6f58-4f1d-ac6c-ea3969459599,Namespace:kube-system,Attempt:0,} returns sandbox id \"acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5\"" Mar 17 20:28:56.083694 containerd[1529]: time="2025-03-17T20:28:56.082792009Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 20:28:56.102461 containerd[1529]: time="2025-03-17T20:28:56.102417807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-66r6x,Uid:451330fa-c9d5-43aa-a54d-add34474be19,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\"" Mar 17 20:28:56.163779 kubelet[2772]: E0317 20:28:56.163248 2772 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 20:28:56.163779 kubelet[2772]: E0317 20:28:56.163392 2772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/83a3df04-39d8-4bc9-ad90-c50f9cb68393-kube-proxy podName:83a3df04-39d8-4bc9-ad90-c50f9cb68393 nodeName:}" failed. No retries permitted until 2025-03-17 20:28:56.663358819 +0000 UTC m=+6.503625291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/83a3df04-39d8-4bc9-ad90-c50f9cb68393-kube-proxy") pod "kube-proxy-chxf8" (UID: "83a3df04-39d8-4bc9-ad90-c50f9cb68393") : failed to sync configmap cache: timed out waiting for the condition Mar 17 20:28:56.785170 containerd[1529]: time="2025-03-17T20:28:56.785072388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chxf8,Uid:83a3df04-39d8-4bc9-ad90-c50f9cb68393,Namespace:kube-system,Attempt:0,}" Mar 17 20:28:56.833817 containerd[1529]: time="2025-03-17T20:28:56.833687021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:28:56.833817 containerd[1529]: time="2025-03-17T20:28:56.833755074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:28:56.833817 containerd[1529]: time="2025-03-17T20:28:56.833772776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:56.834911 containerd[1529]: time="2025-03-17T20:28:56.833867305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:28:56.868022 systemd[1]: Started cri-containerd-25618e642d05830f38bde7b997ab4a6b331ac9a14b6b44aa0be7900683fcbd79.scope - libcontainer container 25618e642d05830f38bde7b997ab4a6b331ac9a14b6b44aa0be7900683fcbd79. 
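The MountVolume.SetUp failure above is benign: the kube-proxy ConfigMap simply has not landed in the kubelet's cache yet (the node was only just authorized to read it), so the mount is retried after a 500ms back-off. A self-contained sketch of that retry pattern; the doubling back-off is an approximation for illustration, not the kubelet's exact algorithm:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountConfigMap stands in for MountVolume.SetUp; it fails until the
// ConfigMap has been synced into the local cache (here: after two attempts).
func mountConfigMap(attempt int) error {
	if attempt < 3 {
		return errors.New("failed to sync configmap cache: timed out waiting for the condition")
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // first durationBeforeRetry seen in the log
	for attempt := 1; ; attempt++ {
		if err := mountConfigMap(attempt); err == nil {
			fmt.Println("volume mounted on attempt", attempt)
			return
		}
		fmt.Printf("attempt %d failed, retrying in %s\n", attempt, delay)
		time.Sleep(delay)
		delay *= 2 // approximate exponential back-off between retries
	}
}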
Mar 17 20:28:56.900280 containerd[1529]: time="2025-03-17T20:28:56.900217245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chxf8,Uid:83a3df04-39d8-4bc9-ad90-c50f9cb68393,Namespace:kube-system,Attempt:0,} returns sandbox id \"25618e642d05830f38bde7b997ab4a6b331ac9a14b6b44aa0be7900683fcbd79\"" Mar 17 20:28:56.905388 containerd[1529]: time="2025-03-17T20:28:56.905350739Z" level=info msg="CreateContainer within sandbox \"25618e642d05830f38bde7b997ab4a6b331ac9a14b6b44aa0be7900683fcbd79\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 20:28:56.930765 containerd[1529]: time="2025-03-17T20:28:56.930703155Z" level=info msg="CreateContainer within sandbox \"25618e642d05830f38bde7b997ab4a6b331ac9a14b6b44aa0be7900683fcbd79\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d96d400b91ceae3746238fd650892d15052ea83e6cb4ba9b7e4fce435456e57\"" Mar 17 20:28:56.932696 containerd[1529]: time="2025-03-17T20:28:56.931490088Z" level=info msg="StartContainer for \"9d96d400b91ceae3746238fd650892d15052ea83e6cb4ba9b7e4fce435456e57\"" Mar 17 20:28:56.970013 systemd[1]: Started cri-containerd-9d96d400b91ceae3746238fd650892d15052ea83e6cb4ba9b7e4fce435456e57.scope - libcontainer container 9d96d400b91ceae3746238fd650892d15052ea83e6cb4ba9b7e4fce435456e57. Mar 17 20:28:57.022955 containerd[1529]: time="2025-03-17T20:28:57.022891426Z" level=info msg="StartContainer for \"9d96d400b91ceae3746238fd650892d15052ea83e6cb4ba9b7e4fce435456e57\" returns successfully" Mar 17 20:28:57.177030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929639136.mount: Deactivated successfully. Mar 17 20:28:57.888248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439322485.mount: Deactivated successfully. Mar 17 20:28:59.026585 containerd[1529]: time="2025-03-17T20:28:59.025976660Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:59.028323 containerd[1529]: time="2025-03-17T20:28:59.028258077Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 20:28:59.029351 containerd[1529]: time="2025-03-17T20:28:59.029315634Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:28:59.031928 containerd[1529]: time="2025-03-17T20:28:59.031350198Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.948515564s" Mar 17 20:28:59.031928 containerd[1529]: time="2025-03-17T20:28:59.031394677Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 20:28:59.033913 containerd[1529]: time="2025-03-17T20:28:59.033102131Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 20:28:59.037381 containerd[1529]: time="2025-03-17T20:28:59.037254090Z" level=info msg="CreateContainer within sandbox \"acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 20:28:59.061832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860131666.mount: Deactivated successfully. Mar 17 20:28:59.065734 containerd[1529]: time="2025-03-17T20:28:59.065539484Z" level=info msg="CreateContainer within sandbox \"acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\"" Mar 17 20:28:59.068588 containerd[1529]: time="2025-03-17T20:28:59.066259475Z" level=info msg="StartContainer for \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\"" Mar 17 20:28:59.114001 kubelet[2772]: I0317 20:28:59.113300 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-chxf8" podStartSLOduration=5.113276076 podStartE2EDuration="5.113276076s" podCreationTimestamp="2025-03-17 20:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:28:57.454962364 +0000 UTC m=+7.295228850" watchObservedRunningTime="2025-03-17 20:28:59.113276076 +0000 UTC m=+8.953542556" Mar 17 20:28:59.136283 systemd[1]: Started cri-containerd-e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8.scope - libcontainer container e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8. Mar 17 20:28:59.181930 containerd[1529]: time="2025-03-17T20:28:59.181855205Z" level=info msg="StartContainer for \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\" returns successfully" Mar 17 20:28:59.465108 kubelet[2772]: I0317 20:28:59.465040 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dfxp5" podStartSLOduration=1.514535822 podStartE2EDuration="4.465019663s" podCreationTimestamp="2025-03-17 20:28:55 +0000 UTC" firstStartedPulling="2025-03-17 20:28:56.082220583 +0000 UTC m=+5.922487063" lastFinishedPulling="2025-03-17 20:28:59.032704419 +0000 UTC m=+8.872970904" observedRunningTime="2025-03-17 20:28:59.464760877 +0000 UTC m=+9.305027377" watchObservedRunningTime="2025-03-17 20:28:59.465019663 +0000 UTC m=+9.305286144" Mar 17 20:29:06.238625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708701523.mount: Deactivated successfully. 
Mar 17 20:29:09.475659 containerd[1529]: time="2025-03-17T20:29:09.475573530Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:29:09.478805 containerd[1529]: time="2025-03-17T20:29:09.478735193Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 17 20:29:09.485577 containerd[1529]: time="2025-03-17T20:29:09.485483849Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 20:29:09.489070 containerd[1529]: time="2025-03-17T20:29:09.488667659Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.455495013s" Mar 17 20:29:09.489070 containerd[1529]: time="2025-03-17T20:29:09.488722352Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 20:29:09.493039 containerd[1529]: time="2025-03-17T20:29:09.492868157Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 20:29:09.541376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712594719.mount: Deactivated successfully. Mar 17 20:29:09.548893 containerd[1529]: time="2025-03-17T20:29:09.548803572Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\"" Mar 17 20:29:09.551471 containerd[1529]: time="2025-03-17T20:29:09.550122657Z" level=info msg="StartContainer for \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\"" Mar 17 20:29:09.709009 systemd[1]: Started cri-containerd-f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee.scope - libcontainer container f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee. Mar 17 20:29:09.765743 containerd[1529]: time="2025-03-17T20:29:09.763747416Z" level=info msg="StartContainer for \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\" returns successfully" Mar 17 20:29:09.783207 systemd[1]: cri-containerd-f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee.scope: Deactivated successfully. 
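Both Cilium image pulls have completed by this point, and the byte counts and durations logged above give a rough sense of registry throughput:

package main

import "fmt"

func main() {
	// Byte counts and durations copied from the two PullImage results above.
	pulls := []struct {
		name    string
		bytes   float64
		seconds float64
	}{
		{"cilium/operator-generic:v1.12.5", 18904197, 2.948515564},
		{"cilium/cilium:v1.12.5", 166730503, 10.455495013},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MB/s\n", p.name, p.bytes/p.seconds/1e6)
	}
	// Roughly 6.4 MB/s for the operator image and 15.9 MB/s for the much
	// larger agent image.
}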
Mar 17 20:29:10.053328 containerd[1529]: time="2025-03-17T20:29:10.041914286Z" level=info msg="shim disconnected" id=f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee namespace=k8s.io Mar 17 20:29:10.053889 containerd[1529]: time="2025-03-17T20:29:10.053573724Z" level=warning msg="cleaning up after shim disconnected" id=f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee namespace=k8s.io Mar 17 20:29:10.053889 containerd[1529]: time="2025-03-17T20:29:10.053617680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:29:10.070060 containerd[1529]: time="2025-03-17T20:29:10.069981261Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:29:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 20:29:10.494520 containerd[1529]: time="2025-03-17T20:29:10.494398111Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 20:29:10.518136 containerd[1529]: time="2025-03-17T20:29:10.516445199Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\"" Mar 17 20:29:10.518136 containerd[1529]: time="2025-03-17T20:29:10.517066139Z" level=info msg="StartContainer for \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\"" Mar 17 20:29:10.538193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee-rootfs.mount: Deactivated successfully. Mar 17 20:29:10.576269 systemd[1]: run-containerd-runc-k8s.io-eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2-runc.yomwKn.mount: Deactivated successfully. Mar 17 20:29:10.588879 systemd[1]: Started cri-containerd-eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2.scope - libcontainer container eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2. Mar 17 20:29:10.634506 containerd[1529]: time="2025-03-17T20:29:10.634362567Z" level=info msg="StartContainer for \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\" returns successfully" Mar 17 20:29:10.657534 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 20:29:10.658352 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 20:29:10.660828 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 20:29:10.667215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 20:29:10.670792 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 20:29:10.671472 systemd[1]: cri-containerd-eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2.scope: Deactivated successfully. Mar 17 20:29:10.709779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 17 20:29:10.712529 containerd[1529]: time="2025-03-17T20:29:10.712356776Z" level=info msg="shim disconnected" id=eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2 namespace=k8s.io Mar 17 20:29:10.712713 containerd[1529]: time="2025-03-17T20:29:10.712567312Z" level=warning msg="cleaning up after shim disconnected" id=eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2 namespace=k8s.io Mar 17 20:29:10.712713 containerd[1529]: time="2025-03-17T20:29:10.712586427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:29:11.500472 containerd[1529]: time="2025-03-17T20:29:11.500368232Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 20:29:11.540437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2-rootfs.mount: Deactivated successfully. Mar 17 20:29:11.549925 containerd[1529]: time="2025-03-17T20:29:11.549854947Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\"" Mar 17 20:29:11.551221 containerd[1529]: time="2025-03-17T20:29:11.551182144Z" level=info msg="StartContainer for \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\"" Mar 17 20:29:11.602077 systemd[1]: Started cri-containerd-ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91.scope - libcontainer container ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91. Mar 17 20:29:11.648473 containerd[1529]: time="2025-03-17T20:29:11.648254186Z" level=info msg="StartContainer for \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\" returns successfully" Mar 17 20:29:11.656522 systemd[1]: cri-containerd-ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91.scope: Deactivated successfully. Mar 17 20:29:11.657844 systemd[1]: cri-containerd-ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91.scope: Consumed 29ms CPU time, 7.5M memory peak, 1M read from disk. Mar 17 20:29:11.695430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91-rootfs.mount: Deactivated successfully. Mar 17 20:29:11.708715 containerd[1529]: time="2025-03-17T20:29:11.708429922Z" level=info msg="shim disconnected" id=ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91 namespace=k8s.io Mar 17 20:29:11.708715 containerd[1529]: time="2025-03-17T20:29:11.708496799Z" level=warning msg="cleaning up after shim disconnected" id=ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91 namespace=k8s.io Mar 17 20:29:11.708715 containerd[1529]: time="2025-03-17T20:29:11.708512495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:29:12.504379 containerd[1529]: time="2025-03-17T20:29:12.504303654Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 20:29:12.554520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238576437.mount: Deactivated successfully. 
Mar 17 20:29:12.556171 containerd[1529]: time="2025-03-17T20:29:12.556108638Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\"" Mar 17 20:29:12.558800 containerd[1529]: time="2025-03-17T20:29:12.557881211Z" level=info msg="StartContainer for \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\"" Mar 17 20:29:12.596878 systemd[1]: Started cri-containerd-e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1.scope - libcontainer container e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1. Mar 17 20:29:12.640686 systemd[1]: cri-containerd-e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1.scope: Deactivated successfully. Mar 17 20:29:12.642586 containerd[1529]: time="2025-03-17T20:29:12.642038596Z" level=info msg="StartContainer for \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\" returns successfully" Mar 17 20:29:12.673248 containerd[1529]: time="2025-03-17T20:29:12.673150809Z" level=info msg="shim disconnected" id=e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1 namespace=k8s.io Mar 17 20:29:12.673248 containerd[1529]: time="2025-03-17T20:29:12.673230194Z" level=warning msg="cleaning up after shim disconnected" id=e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1 namespace=k8s.io Mar 17 20:29:12.673248 containerd[1529]: time="2025-03-17T20:29:12.673245065Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:29:13.511939 containerd[1529]: time="2025-03-17T20:29:13.511546258Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 20:29:13.541057 containerd[1529]: time="2025-03-17T20:29:13.540554001Z" level=info msg="CreateContainer within sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\"" Mar 17 20:29:13.545621 containerd[1529]: time="2025-03-17T20:29:13.541585994Z" level=info msg="StartContainer for \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\"" Mar 17 20:29:13.546657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1-rootfs.mount: Deactivated successfully. Mar 17 20:29:13.602867 systemd[1]: Started cri-containerd-c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff.scope - libcontainer container c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff. Mar 17 20:29:13.648809 containerd[1529]: time="2025-03-17T20:29:13.648601350Z" level=info msg="StartContainer for \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\" returns successfully" Mar 17 20:29:13.756902 systemd[1]: run-containerd-runc-k8s.io-c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff-runc.R1hFd2.mount: Deactivated successfully. 
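By this point the log has walked through Cilium's init-container chain inside one sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state each start, exit, and have their shims and rootfs mounts cleaned up before the long-running cilium-agent container starts. Each of those containers appears to systemd as a transient cri-containerd-<id>.scope unit. A hedged sketch for enumerating such scopes over D-Bus with go-systemd; only the unit-name prefix is taken from the log, the rest is an assumed convenience tool.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	// Connect to the system instance of systemd, which logs these scope transitions.
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	units, err := conn.ListUnitsContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range units {
		// Each running container corresponds to a transient cri-containerd-<id>.scope unit.
		if strings.HasPrefix(u.Name, "cri-containerd-") && strings.HasSuffix(u.Name, ".scope") {
			fmt.Printf("%-80s %s/%s\n", u.Name, u.ActiveState, u.SubState)
		}
	}
}
```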
Mar 17 20:29:13.952777 kubelet[2772]: I0317 20:29:13.951824 2772 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 20:29:14.021433 systemd[1]: Created slice kubepods-burstable-pod18d94811_494a_4b82_b2fe_8ac27410f3ad.slice - libcontainer container kubepods-burstable-pod18d94811_494a_4b82_b2fe_8ac27410f3ad.slice. Mar 17 20:29:14.035076 systemd[1]: Created slice kubepods-burstable-podd181dda6_f19f_4e22_bf51_6f76d1c7da5c.slice - libcontainer container kubepods-burstable-podd181dda6_f19f_4e22_bf51_6f76d1c7da5c.slice. Mar 17 20:29:14.057326 kubelet[2772]: I0317 20:29:14.057280 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18d94811-494a-4b82-b2fe-8ac27410f3ad-config-volume\") pod \"coredns-668d6bf9bc-gm6lx\" (UID: \"18d94811-494a-4b82-b2fe-8ac27410f3ad\") " pod="kube-system/coredns-668d6bf9bc-gm6lx" Mar 17 20:29:14.057741 kubelet[2772]: I0317 20:29:14.057697 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbk9q\" (UniqueName: \"kubernetes.io/projected/d181dda6-f19f-4e22-bf51-6f76d1c7da5c-kube-api-access-lbk9q\") pod \"coredns-668d6bf9bc-2vtj7\" (UID: \"d181dda6-f19f-4e22-bf51-6f76d1c7da5c\") " pod="kube-system/coredns-668d6bf9bc-2vtj7" Mar 17 20:29:14.057961 kubelet[2772]: I0317 20:29:14.057907 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdqhp\" (UniqueName: \"kubernetes.io/projected/18d94811-494a-4b82-b2fe-8ac27410f3ad-kube-api-access-kdqhp\") pod \"coredns-668d6bf9bc-gm6lx\" (UID: \"18d94811-494a-4b82-b2fe-8ac27410f3ad\") " pod="kube-system/coredns-668d6bf9bc-gm6lx" Mar 17 20:29:14.058317 kubelet[2772]: I0317 20:29:14.058186 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d181dda6-f19f-4e22-bf51-6f76d1c7da5c-config-volume\") pod \"coredns-668d6bf9bc-2vtj7\" (UID: \"d181dda6-f19f-4e22-bf51-6f76d1c7da5c\") " pod="kube-system/coredns-668d6bf9bc-2vtj7" Mar 17 20:29:14.346496 containerd[1529]: time="2025-03-17T20:29:14.346270280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm6lx,Uid:18d94811-494a-4b82-b2fe-8ac27410f3ad,Namespace:kube-system,Attempt:0,}" Mar 17 20:29:14.346916 containerd[1529]: time="2025-03-17T20:29:14.346298466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vtj7,Uid:d181dda6-f19f-4e22-bf51-6f76d1c7da5c,Namespace:kube-system,Attempt:0,}" Mar 17 20:29:14.541273 kubelet[2772]: I0317 20:29:14.541201 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-66r6x" podStartSLOduration=7.155542362 podStartE2EDuration="20.540440895s" podCreationTimestamp="2025-03-17 20:28:54 +0000 UTC" firstStartedPulling="2025-03-17 20:28:56.105072414 +0000 UTC m=+5.945338883" lastFinishedPulling="2025-03-17 20:29:09.489970947 +0000 UTC m=+19.330237416" observedRunningTime="2025-03-17 20:29:14.539951824 +0000 UTC m=+24.380218320" watchObservedRunningTime="2025-03-17 20:29:14.540440895 +0000 UTC m=+24.380707376" Mar 17 20:29:16.294218 systemd-networkd[1454]: cilium_host: Link UP Mar 17 20:29:16.294495 systemd-networkd[1454]: cilium_net: Link UP Mar 17 20:29:16.300939 systemd-networkd[1454]: cilium_net: Gained carrier Mar 17 20:29:16.301315 systemd-networkd[1454]: cilium_host: Gained carrier Mar 17 
20:29:16.477727 systemd-networkd[1454]: cilium_vxlan: Link UP Mar 17 20:29:16.477740 systemd-networkd[1454]: cilium_vxlan: Gained carrier Mar 17 20:29:17.055368 kernel: NET: Registered PF_ALG protocol family Mar 17 20:29:17.213040 systemd-networkd[1454]: cilium_host: Gained IPv6LL Mar 17 20:29:17.276958 systemd-networkd[1454]: cilium_net: Gained IPv6LL Mar 17 20:29:17.916964 systemd-networkd[1454]: cilium_vxlan: Gained IPv6LL Mar 17 20:29:18.167412 systemd-networkd[1454]: lxc_health: Link UP Mar 17 20:29:18.179138 systemd-networkd[1454]: lxc_health: Gained carrier Mar 17 20:29:18.446264 systemd-networkd[1454]: lxc3459747b3991: Link UP Mar 17 20:29:18.480592 kernel: eth0: renamed from tmp73a9b Mar 17 20:29:18.507781 kernel: eth0: renamed from tmp9e900 Mar 17 20:29:18.514156 systemd-networkd[1454]: lxcd554cd75fe99: Link UP Mar 17 20:29:18.517683 systemd-networkd[1454]: lxc3459747b3991: Gained carrier Mar 17 20:29:18.522739 systemd-networkd[1454]: lxcd554cd75fe99: Gained carrier Mar 17 20:29:19.324901 systemd-networkd[1454]: lxc_health: Gained IPv6LL Mar 17 20:29:20.220895 systemd-networkd[1454]: lxc3459747b3991: Gained IPv6LL Mar 17 20:29:20.542718 systemd-networkd[1454]: lxcd554cd75fe99: Gained IPv6LL Mar 17 20:29:24.290507 containerd[1529]: time="2025-03-17T20:29:24.290298304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:29:24.290507 containerd[1529]: time="2025-03-17T20:29:24.290446894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:29:24.290507 containerd[1529]: time="2025-03-17T20:29:24.290581451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:29:24.292762 containerd[1529]: time="2025-03-17T20:29:24.292465531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:29:24.382304 systemd[1]: Started cri-containerd-9e900b2740fd7de2216a20b961c570730f145d8a1283403e98d0cfe1c9e0f662.scope - libcontainer container 9e900b2740fd7de2216a20b961c570730f145d8a1283403e98d0cfe1c9e0f662. Mar 17 20:29:24.470474 containerd[1529]: time="2025-03-17T20:29:24.470160609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:29:24.470474 containerd[1529]: time="2025-03-17T20:29:24.470256898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:29:24.470474 containerd[1529]: time="2025-03-17T20:29:24.470280302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:29:24.471954 containerd[1529]: time="2025-03-17T20:29:24.470432532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:29:24.521896 systemd[1]: Started cri-containerd-73a9bfd6bfb0e30b5b8c27289b945b24cdb0925474a8e386b428088f8bf63fed.scope - libcontainer container 73a9bfd6bfb0e30b5b8c27289b945b24cdb0925474a8e386b428088f8bf63fed. 
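The systemd-networkd entries record Cilium's datapath interfaces coming up: the cilium_host/cilium_net pair, cilium_vxlan for the overlay, lxc_health for health checking, and one lxc* veth per pod, each later gaining an IPv6 link-local address. A small sketch using the netlink package to list those links and their operational state; the package choice and output format are assumptions, not something the log prescribes.

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Enumerate all network links on the node, including cilium_* and lxc* devices.
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		a := l.Attrs()
		// Name, link type (veth, vxlan, ...) and operational state (up/down/unknown).
		fmt.Printf("%-20s %-8s %s\n", a.Name, l.Type(), a.OperState)
	}
}
```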
Mar 17 20:29:24.604569 containerd[1529]: time="2025-03-17T20:29:24.603522387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm6lx,Uid:18d94811-494a-4b82-b2fe-8ac27410f3ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e900b2740fd7de2216a20b961c570730f145d8a1283403e98d0cfe1c9e0f662\"" Mar 17 20:29:24.614968 containerd[1529]: time="2025-03-17T20:29:24.614719050Z" level=info msg="CreateContainer within sandbox \"9e900b2740fd7de2216a20b961c570730f145d8a1283403e98d0cfe1c9e0f662\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 20:29:24.648773 containerd[1529]: time="2025-03-17T20:29:24.648723336Z" level=info msg="CreateContainer within sandbox \"9e900b2740fd7de2216a20b961c570730f145d8a1283403e98d0cfe1c9e0f662\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11fa2433e7519004d901a6d719a3b4a279b7d4dea3f8b13d5e0d47d05fd698c2\"" Mar 17 20:29:24.651725 containerd[1529]: time="2025-03-17T20:29:24.650043159Z" level=info msg="StartContainer for \"11fa2433e7519004d901a6d719a3b4a279b7d4dea3f8b13d5e0d47d05fd698c2\"" Mar 17 20:29:24.689314 containerd[1529]: time="2025-03-17T20:29:24.689258184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vtj7,Uid:d181dda6-f19f-4e22-bf51-6f76d1c7da5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"73a9bfd6bfb0e30b5b8c27289b945b24cdb0925474a8e386b428088f8bf63fed\"" Mar 17 20:29:24.694985 containerd[1529]: time="2025-03-17T20:29:24.694949895Z" level=info msg="CreateContainer within sandbox \"73a9bfd6bfb0e30b5b8c27289b945b24cdb0925474a8e386b428088f8bf63fed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 20:29:24.724899 systemd[1]: Started cri-containerd-11fa2433e7519004d901a6d719a3b4a279b7d4dea3f8b13d5e0d47d05fd698c2.scope - libcontainer container 11fa2433e7519004d901a6d719a3b4a279b7d4dea3f8b13d5e0d47d05fd698c2. Mar 17 20:29:24.730121 containerd[1529]: time="2025-03-17T20:29:24.730008141Z" level=info msg="CreateContainer within sandbox \"73a9bfd6bfb0e30b5b8c27289b945b24cdb0925474a8e386b428088f8bf63fed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78e0109a8597e28692f08e9ea5223f6bf2703625202ae4d6c57e438864a0241a\"" Mar 17 20:29:24.732944 containerd[1529]: time="2025-03-17T20:29:24.732302037Z" level=info msg="StartContainer for \"78e0109a8597e28692f08e9ea5223f6bf2703625202ae4d6c57e438864a0241a\"" Mar 17 20:29:24.783506 containerd[1529]: time="2025-03-17T20:29:24.783190079Z" level=info msg="StartContainer for \"11fa2433e7519004d901a6d719a3b4a279b7d4dea3f8b13d5e0d47d05fd698c2\" returns successfully" Mar 17 20:29:24.797898 systemd[1]: Started cri-containerd-78e0109a8597e28692f08e9ea5223f6bf2703625202ae4d6c57e438864a0241a.scope - libcontainer container 78e0109a8597e28692f08e9ea5223f6bf2703625202ae4d6c57e438864a0241a. Mar 17 20:29:24.847602 containerd[1529]: time="2025-03-17T20:29:24.846713789Z" level=info msg="StartContainer for \"78e0109a8597e28692f08e9ea5223f6bf2703625202ae4d6c57e438864a0241a\" returns successfully" Mar 17 20:29:25.303200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482847072.mount: Deactivated successfully. 
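With both coredns sandboxes running, containerd creates and starts a coredns container in each, and kubelet goes on to record pod-startup durations. A sketch for checking the same pods from the API server with client-go; the kubeconfig path is hypothetical, and the k8s-app=kube-dns label is the usual CoreDNS convention rather than something stated in this log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes an admin kubeconfig at this (hypothetical) path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// CoreDNS pods are conventionally labelled k8s-app=kube-dns in kube-system.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\t%s\n", p.Name, p.Status.Phase, p.Status.PodIP)
	}
}
```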
Mar 17 20:29:25.577113 kubelet[2772]: I0317 20:29:25.576896 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2vtj7" podStartSLOduration=30.576870158 podStartE2EDuration="30.576870158s" podCreationTimestamp="2025-03-17 20:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:29:25.576164046 +0000 UTC m=+35.416430550" watchObservedRunningTime="2025-03-17 20:29:25.576870158 +0000 UTC m=+35.417136631" Mar 17 20:30:01.901387 systemd[1]: Started sshd@9-10.230.57.126:22-139.178.89.65:46182.service - OpenSSH per-connection server daemon (139.178.89.65:46182). Mar 17 20:30:02.566976 update_engine[1508]: I20250317 20:30:02.563545 1508 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 20:30:02.566976 update_engine[1508]: I20250317 20:30:02.563707 1508 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.567523 1508 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.569222 1508 omaha_request_params.cc:62] Current group set to beta Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.569559 1508 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.569584 1508 update_attempter.cc:643] Scheduling an action processor start. Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.569623 1508 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.569939 1508 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.570060 1508 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.570084 1508 omaha_request_action.cc:272] Request: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: Mar 17 20:30:02.570212 update_engine[1508]: I20250317 20:30:02.570106 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 20:30:02.582817 update_engine[1508]: I20250317 20:30:02.582751 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 20:30:02.583735 update_engine[1508]: I20250317 20:30:02.583591 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 20:30:02.594830 update_engine[1508]: E20250317 20:30:02.594738 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 20:30:02.595022 update_engine[1508]: I20250317 20:30:02.594884 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 20:30:02.597864 locksmithd[1533]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 20:30:02.849187 sshd[4157]: Accepted publickey for core from 139.178.89.65 port 46182 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:02.851901 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:02.861833 systemd-logind[1507]: New session 12 of user core. Mar 17 20:30:02.871867 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 20:30:03.999012 sshd[4159]: Connection closed by 139.178.89.65 port 46182 Mar 17 20:30:04.000733 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:04.005867 systemd[1]: sshd@9-10.230.57.126:22-139.178.89.65:46182.service: Deactivated successfully. Mar 17 20:30:04.009435 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 20:30:04.011702 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Mar 17 20:30:04.014010 systemd-logind[1507]: Removed session 12. Mar 17 20:30:09.158961 systemd[1]: Started sshd@10-10.230.57.126:22-139.178.89.65:46192.service - OpenSSH per-connection server daemon (139.178.89.65:46192). Mar 17 20:30:10.060706 sshd[4172]: Accepted publickey for core from 139.178.89.65 port 46192 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:10.062837 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:10.071014 systemd-logind[1507]: New session 13 of user core. Mar 17 20:30:10.078873 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 20:30:10.777058 sshd[4174]: Connection closed by 139.178.89.65 port 46192 Mar 17 20:30:10.778149 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:10.784848 systemd[1]: sshd@10-10.230.57.126:22-139.178.89.65:46192.service: Deactivated successfully. Mar 17 20:30:10.788011 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 20:30:10.789767 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Mar 17 20:30:10.791363 systemd-logind[1507]: Removed session 13. Mar 17 20:30:12.510673 update_engine[1508]: I20250317 20:30:12.510540 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 20:30:12.511306 update_engine[1508]: I20250317 20:30:12.510957 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 20:30:12.511364 update_engine[1508]: I20250317 20:30:12.511296 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 20:30:12.511931 update_engine[1508]: E20250317 20:30:12.511881 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 20:30:12.512017 update_engine[1508]: I20250317 20:30:12.511961 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 20:30:15.952234 systemd[1]: Started sshd@11-10.230.57.126:22-139.178.89.65:39252.service - OpenSSH per-connection server daemon (139.178.89.65:39252). 
Mar 17 20:30:16.841465 sshd[4188]: Accepted publickey for core from 139.178.89.65 port 39252 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:16.843671 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:16.851804 systemd-logind[1507]: New session 14 of user core. Mar 17 20:30:16.858874 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 20:30:17.546255 sshd[4190]: Connection closed by 139.178.89.65 port 39252 Mar 17 20:30:17.546830 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:17.552209 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. Mar 17 20:30:17.553245 systemd[1]: sshd@11-10.230.57.126:22-139.178.89.65:39252.service: Deactivated successfully. Mar 17 20:30:17.557770 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 20:30:17.559557 systemd-logind[1507]: Removed session 14. Mar 17 20:30:22.514728 update_engine[1508]: I20250317 20:30:22.514297 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 20:30:22.515403 update_engine[1508]: I20250317 20:30:22.514752 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 20:30:22.515403 update_engine[1508]: I20250317 20:30:22.515105 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 20:30:22.515863 update_engine[1508]: E20250317 20:30:22.515815 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 20:30:22.515946 update_engine[1508]: I20250317 20:30:22.515897 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 20:30:22.709021 systemd[1]: Started sshd@12-10.230.57.126:22-139.178.89.65:49988.service - OpenSSH per-connection server daemon (139.178.89.65:49988). Mar 17 20:30:23.653484 sshd[4203]: Accepted publickey for core from 139.178.89.65 port 49988 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:23.655584 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:23.664445 systemd-logind[1507]: New session 15 of user core. Mar 17 20:30:23.670867 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 20:30:24.381719 sshd[4205]: Connection closed by 139.178.89.65 port 49988 Mar 17 20:30:24.383184 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:24.388772 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Mar 17 20:30:24.389508 systemd[1]: sshd@12-10.230.57.126:22-139.178.89.65:49988.service: Deactivated successfully. Mar 17 20:30:24.392543 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 20:30:24.394734 systemd-logind[1507]: Removed session 15. Mar 17 20:30:24.541522 systemd[1]: Started sshd@13-10.230.57.126:22-139.178.89.65:50002.service - OpenSSH per-connection server daemon (139.178.89.65:50002). Mar 17 20:30:25.437909 sshd[4218]: Accepted publickey for core from 139.178.89.65 port 50002 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:25.439855 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:25.450109 systemd-logind[1507]: New session 16 of user core. Mar 17 20:30:25.465863 systemd[1]: Started session-16.scope - Session 16 of User core. 
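The surrounding stretch of the log is routine SSH session churn: sshd accepts a publickey login for the core user, systemd-logind opens a numbered session, and the session is torn down shortly afterwards. For context, a minimal client that would produce exactly this kind of entry, written against golang.org/x/crypto/ssh; the key path and the command run are illustrative assumptions, while the user and address come from the log.

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load a private key whose public half is authorized for the core user (assumed path).
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a sketch, not for production
	}

	// Address taken from the log's sshd service units.
	client, err := ssh.Dial("tcp", "10.230.57.126:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run an arbitrary command; this is what shows up as a logind session on the host.
	out, err := session.CombinedOutput("systemctl is-system-running")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}
```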
Mar 17 20:30:26.222538 sshd[4220]: Connection closed by 139.178.89.65 port 50002 Mar 17 20:30:26.223867 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:26.229613 systemd[1]: sshd@13-10.230.57.126:22-139.178.89.65:50002.service: Deactivated successfully. Mar 17 20:30:26.232036 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 20:30:26.233221 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. Mar 17 20:30:26.235123 systemd-logind[1507]: Removed session 16. Mar 17 20:30:26.379105 systemd[1]: Started sshd@14-10.230.57.126:22-139.178.89.65:50016.service - OpenSSH per-connection server daemon (139.178.89.65:50016). Mar 17 20:30:27.291904 sshd[4230]: Accepted publickey for core from 139.178.89.65 port 50016 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:27.294548 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:27.301982 systemd-logind[1507]: New session 17 of user core. Mar 17 20:30:27.308894 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 20:30:28.013043 sshd[4232]: Connection closed by 139.178.89.65 port 50016 Mar 17 20:30:28.014363 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:28.019293 systemd[1]: sshd@14-10.230.57.126:22-139.178.89.65:50016.service: Deactivated successfully. Mar 17 20:30:28.021845 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 20:30:28.023102 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. Mar 17 20:30:28.024627 systemd-logind[1507]: Removed session 17. Mar 17 20:30:32.513448 update_engine[1508]: I20250317 20:30:32.513288 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 20:30:32.514945 update_engine[1508]: I20250317 20:30:32.513906 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 20:30:32.514945 update_engine[1508]: I20250317 20:30:32.514380 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 20:30:32.515125 update_engine[1508]: E20250317 20:30:32.514976 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 20:30:32.515125 update_engine[1508]: I20250317 20:30:32.515053 1508 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 20:30:32.515125 update_engine[1508]: I20250317 20:30:32.515074 1508 omaha_request_action.cc:617] Omaha request response: Mar 17 20:30:32.515303 update_engine[1508]: E20250317 20:30:32.515230 1508 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 17 20:30:32.518257 update_engine[1508]: I20250317 20:30:32.518187 1508 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 20:30:32.518257 update_engine[1508]: I20250317 20:30:32.518229 1508 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 20:30:32.518257 update_engine[1508]: I20250317 20:30:32.518244 1508 update_attempter.cc:306] Processing Done. Mar 17 20:30:32.518457 update_engine[1508]: E20250317 20:30:32.518302 1508 update_attempter.cc:619] Update failed. 
Mar 17 20:30:32.518457 update_engine[1508]: I20250317 20:30:32.518329 1508 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 20:30:32.518457 update_engine[1508]: I20250317 20:30:32.518343 1508 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 20:30:32.518457 update_engine[1508]: I20250317 20:30:32.518357 1508 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 17 20:30:32.518716 update_engine[1508]: I20250317 20:30:32.518486 1508 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 20:30:32.518716 update_engine[1508]: I20250317 20:30:32.518532 1508 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 20:30:32.518716 update_engine[1508]: I20250317 20:30:32.518548 1508 omaha_request_action.cc:272] Request: Mar 17 20:30:32.518716 update_engine[1508]: Mar 17 20:30:32.518716 update_engine[1508]: Mar 17 20:30:32.518716 update_engine[1508]: Mar 17 20:30:32.518716 update_engine[1508]: Mar 17 20:30:32.518716 update_engine[1508]: Mar 17 20:30:32.518716 update_engine[1508]: Mar 17 20:30:32.518716 update_engine[1508]: I20250317 20:30:32.518561 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 20:30:32.519094 update_engine[1508]: I20250317 20:30:32.518857 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 20:30:32.519167 update_engine[1508]: I20250317 20:30:32.519127 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 20:30:32.519905 locksmithd[1533]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 17 20:30:32.520451 update_engine[1508]: E20250317 20:30:32.520205 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 20:30:32.520451 update_engine[1508]: I20250317 20:30:32.520306 1508 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 20:30:32.520451 update_engine[1508]: I20250317 20:30:32.520326 1508 omaha_request_action.cc:617] Omaha request response: Mar 17 20:30:32.520451 update_engine[1508]: I20250317 20:30:32.520340 1508 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 20:30:32.520451 update_engine[1508]: I20250317 20:30:32.520354 1508 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 20:30:32.520451 update_engine[1508]: I20250317 20:30:32.520365 1508 update_attempter.cc:306] Processing Done. Mar 17 20:30:32.520451 update_engine[1508]: I20250317 20:30:32.520394 1508 update_attempter.cc:310] Error event sent. Mar 17 20:30:32.520451 update_engine[1508]: I20250317 20:30:32.520414 1508 update_check_scheduler.cc:74] Next update check in 42m41s Mar 17 20:30:32.521244 locksmithd[1533]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 17 20:30:33.181111 systemd[1]: Started sshd@15-10.230.57.126:22-139.178.89.65:38116.service - OpenSSH per-connection server daemon (139.178.89.65:38116). Mar 17 20:30:34.079009 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 38116 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:34.081033 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:34.089121 systemd-logind[1507]: New session 18 of user core. 
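The update_engine entries show an Omaha update check that cannot succeed: the request is posted to the literal host name "disabled", DNS resolution fails, the client retries three times, maps the failure to kActionCodeOmahaErrorInHTTPResponse, reports the error event, and schedules the next check in 42m41s. On Flatcar this pattern typically means the update server has been overridden with SERVER=disabled; below is a small sketch that reads the override file and flags that setting. The file path and key names are assumptions based on Flatcar's documented configuration, not on this log.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// On Flatcar, update_engine's group/server overrides are usually kept here,
	// falling back to /usr/share/flatcar/update.conf; adjust the path if needed.
	f, err := os.Open("/etc/flatcar/update.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fmt.Printf("%s = %s\n", k, v)
		// SERVER=disabled produces exactly the "Could not resolve host: disabled"
		// retries seen in the log, because the Omaha URL is the literal string "disabled".
		if k == "SERVER" && v == "disabled" {
			fmt.Println("update checks point at a non-resolvable endpoint; Omaha polls will fail by design")
		}
	}
	if err := s.Err(); err != nil {
		log.Fatal(err)
	}
}
```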
Mar 17 20:30:34.097866 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 20:30:34.801887 sshd[4247]: Connection closed by 139.178.89.65 port 38116 Mar 17 20:30:34.803357 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:34.809438 systemd[1]: sshd@15-10.230.57.126:22-139.178.89.65:38116.service: Deactivated successfully. Mar 17 20:30:34.812494 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 20:30:34.814216 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. Mar 17 20:30:34.816352 systemd-logind[1507]: Removed session 18. Mar 17 20:30:39.961153 systemd[1]: Started sshd@16-10.230.57.126:22-139.178.89.65:38126.service - OpenSSH per-connection server daemon (139.178.89.65:38126). Mar 17 20:30:40.847458 sshd[4259]: Accepted publickey for core from 139.178.89.65 port 38126 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:40.849448 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:40.856081 systemd-logind[1507]: New session 19 of user core. Mar 17 20:30:40.861855 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 20:30:41.548395 sshd[4261]: Connection closed by 139.178.89.65 port 38126 Mar 17 20:30:41.549051 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:41.554988 systemd[1]: sshd@16-10.230.57.126:22-139.178.89.65:38126.service: Deactivated successfully. Mar 17 20:30:41.558161 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 20:30:41.560753 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit. Mar 17 20:30:41.562580 systemd-logind[1507]: Removed session 19. Mar 17 20:30:41.711958 systemd[1]: Started sshd@17-10.230.57.126:22-139.178.89.65:54864.service - OpenSSH per-connection server daemon (139.178.89.65:54864). Mar 17 20:30:42.601155 sshd[4273]: Accepted publickey for core from 139.178.89.65 port 54864 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:42.603217 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:42.610128 systemd-logind[1507]: New session 20 of user core. Mar 17 20:30:42.615876 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 20:30:43.666797 sshd[4275]: Connection closed by 139.178.89.65 port 54864 Mar 17 20:30:43.669367 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:43.675335 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit. Mar 17 20:30:43.676210 systemd[1]: sshd@17-10.230.57.126:22-139.178.89.65:54864.service: Deactivated successfully. Mar 17 20:30:43.679439 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 20:30:43.682097 systemd-logind[1507]: Removed session 20. Mar 17 20:30:43.831025 systemd[1]: Started sshd@18-10.230.57.126:22-139.178.89.65:54874.service - OpenSSH per-connection server daemon (139.178.89.65:54874). Mar 17 20:30:44.759091 sshd[4286]: Accepted publickey for core from 139.178.89.65 port 54874 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:44.761616 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:44.769064 systemd-logind[1507]: New session 21 of user core. Mar 17 20:30:44.773926 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 17 20:30:46.555425 sshd[4288]: Connection closed by 139.178.89.65 port 54874 Mar 17 20:30:46.555140 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:46.560179 systemd[1]: sshd@18-10.230.57.126:22-139.178.89.65:54874.service: Deactivated successfully. Mar 17 20:30:46.562988 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 20:30:46.565331 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit. Mar 17 20:30:46.567166 systemd-logind[1507]: Removed session 21. Mar 17 20:30:46.717028 systemd[1]: Started sshd@19-10.230.57.126:22-139.178.89.65:54880.service - OpenSSH per-connection server daemon (139.178.89.65:54880). Mar 17 20:30:47.607887 sshd[4305]: Accepted publickey for core from 139.178.89.65 port 54880 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:47.609929 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:47.617890 systemd-logind[1507]: New session 22 of user core. Mar 17 20:30:47.624884 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 20:30:48.490905 sshd[4307]: Connection closed by 139.178.89.65 port 54880 Mar 17 20:30:48.491411 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:48.497225 systemd[1]: sshd@19-10.230.57.126:22-139.178.89.65:54880.service: Deactivated successfully. Mar 17 20:30:48.500856 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 20:30:48.502356 systemd-logind[1507]: Session 22 logged out. Waiting for processes to exit. Mar 17 20:30:48.503882 systemd-logind[1507]: Removed session 22. Mar 17 20:30:48.653021 systemd[1]: Started sshd@20-10.230.57.126:22-139.178.89.65:54892.service - OpenSSH per-connection server daemon (139.178.89.65:54892). Mar 17 20:30:49.544547 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 54892 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:49.547759 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:49.556573 systemd-logind[1507]: New session 23 of user core. Mar 17 20:30:49.564835 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 20:30:50.247699 sshd[4319]: Connection closed by 139.178.89.65 port 54892 Mar 17 20:30:50.248611 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:50.253995 systemd-logind[1507]: Session 23 logged out. Waiting for processes to exit. Mar 17 20:30:50.254403 systemd[1]: sshd@20-10.230.57.126:22-139.178.89.65:54892.service: Deactivated successfully. Mar 17 20:30:50.257290 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 20:30:50.258927 systemd-logind[1507]: Removed session 23. Mar 17 20:30:55.415001 systemd[1]: Started sshd@21-10.230.57.126:22-139.178.89.65:42858.service - OpenSSH per-connection server daemon (139.178.89.65:42858). Mar 17 20:30:56.311685 sshd[4335]: Accepted publickey for core from 139.178.89.65 port 42858 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:30:56.313859 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:30:56.322089 systemd-logind[1507]: New session 24 of user core. Mar 17 20:30:56.337926 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 17 20:30:57.020569 sshd[4337]: Connection closed by 139.178.89.65 port 42858 Mar 17 20:30:57.021616 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Mar 17 20:30:57.026974 systemd[1]: sshd@21-10.230.57.126:22-139.178.89.65:42858.service: Deactivated successfully. Mar 17 20:30:57.030705 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 20:30:57.032783 systemd-logind[1507]: Session 24 logged out. Waiting for processes to exit. Mar 17 20:30:57.034752 systemd-logind[1507]: Removed session 24. Mar 17 20:31:02.178989 systemd[1]: Started sshd@22-10.230.57.126:22-139.178.89.65:33476.service - OpenSSH per-connection server daemon (139.178.89.65:33476). Mar 17 20:31:03.069124 sshd[4351]: Accepted publickey for core from 139.178.89.65 port 33476 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:31:03.071283 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:31:03.079616 systemd-logind[1507]: New session 25 of user core. Mar 17 20:31:03.083862 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 20:31:03.771979 sshd[4353]: Connection closed by 139.178.89.65 port 33476 Mar 17 20:31:03.772945 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Mar 17 20:31:03.777998 systemd-logind[1507]: Session 25 logged out. Waiting for processes to exit. Mar 17 20:31:03.779077 systemd[1]: sshd@22-10.230.57.126:22-139.178.89.65:33476.service: Deactivated successfully. Mar 17 20:31:03.781981 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 20:31:03.783774 systemd-logind[1507]: Removed session 25. Mar 17 20:31:08.537986 systemd[1]: Started sshd@23-10.230.57.126:22-103.90.136.32:48564.service - OpenSSH per-connection server daemon (103.90.136.32:48564). Mar 17 20:31:08.936021 systemd[1]: Started sshd@24-10.230.57.126:22-139.178.89.65:33484.service - OpenSSH per-connection server daemon (139.178.89.65:33484). Mar 17 20:31:09.760181 sshd[4364]: Connection closed by authenticating user root 103.90.136.32 port 48564 [preauth] Mar 17 20:31:09.762452 systemd[1]: sshd@23-10.230.57.126:22-103.90.136.32:48564.service: Deactivated successfully. Mar 17 20:31:09.826635 sshd[4367]: Accepted publickey for core from 139.178.89.65 port 33484 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:31:09.828712 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:31:09.837009 systemd-logind[1507]: New session 26 of user core. Mar 17 20:31:09.846888 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 20:31:10.528847 sshd[4371]: Connection closed by 139.178.89.65 port 33484 Mar 17 20:31:10.529930 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Mar 17 20:31:10.534972 systemd-logind[1507]: Session 26 logged out. Waiting for processes to exit. Mar 17 20:31:10.535426 systemd[1]: sshd@24-10.230.57.126:22-139.178.89.65:33484.service: Deactivated successfully. Mar 17 20:31:10.538379 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 20:31:10.539865 systemd-logind[1507]: Removed session 26. Mar 17 20:31:10.688003 systemd[1]: Started sshd@25-10.230.57.126:22-139.178.89.65:33488.service - OpenSSH per-connection server daemon (139.178.89.65:33488). 
Mar 17 20:31:11.582525 sshd[4382]: Accepted publickey for core from 139.178.89.65 port 33488 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:31:11.584579 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:31:11.591981 systemd-logind[1507]: New session 27 of user core. Mar 17 20:31:11.599894 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 20:31:13.939667 kubelet[2772]: I0317 20:31:13.937433 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gm6lx" podStartSLOduration=138.937358783 podStartE2EDuration="2m18.937358783s" podCreationTimestamp="2025-03-17 20:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:29:25.614012484 +0000 UTC m=+35.454278972" watchObservedRunningTime="2025-03-17 20:31:13.937358783 +0000 UTC m=+143.777625258" Mar 17 20:31:14.008144 containerd[1529]: time="2025-03-17T20:31:14.007213787Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 20:31:14.011766 containerd[1529]: time="2025-03-17T20:31:14.011631998Z" level=info msg="StopContainer for \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\" with timeout 30 (s)" Mar 17 20:31:14.014828 containerd[1529]: time="2025-03-17T20:31:14.013835831Z" level=info msg="Stop container \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\" with signal terminated" Mar 17 20:31:14.031782 containerd[1529]: time="2025-03-17T20:31:14.031703983Z" level=info msg="StopContainer for \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\" with timeout 2 (s)" Mar 17 20:31:14.033017 containerd[1529]: time="2025-03-17T20:31:14.032236557Z" level=info msg="Stop container \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\" with signal terminated" Mar 17 20:31:14.034943 systemd[1]: cri-containerd-e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8.scope: Deactivated successfully. Mar 17 20:31:14.035883 systemd[1]: cri-containerd-e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8.scope: Consumed 546ms CPU time, 34.1M memory peak, 7.3M read from disk, 4K written to disk. Mar 17 20:31:14.052816 systemd-networkd[1454]: lxc_health: Link DOWN Mar 17 20:31:14.052827 systemd-networkd[1454]: lxc_health: Lost carrier Mar 17 20:31:14.073008 systemd[1]: cri-containerd-c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff.scope: Deactivated successfully. Mar 17 20:31:14.073906 systemd[1]: cri-containerd-c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff.scope: Consumed 10.421s CPU time, 193.4M memory peak, 68.1M read from disk, 13.3M written to disk. Mar 17 20:31:14.114506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8-rootfs.mount: Deactivated successfully. 
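Here the log turns to tearing down the Cilium pod: the 05-cilium.conf CNI config is removed, kubelet asks containerd to stop one container (e53b…) with a 30 s timeout and the cilium-agent container (c170…) with a 2 s timeout, each by first sending SIGTERM, and the lxc_health link goes down as the datapath is dismantled. A sketch of that stop sequence against the containerd client follows; the container ID is copied from the log purely for illustration, and the 2-second timeout mirrors the one kubelet used for the agent.

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the log (the cilium-agent container), used here only as an example.
	cont, err := client.LoadContainer(ctx, "c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff")
	if err != nil {
		log.Fatal(err)
	}
	task, err := cont.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Ask politely first, as the CRI does ("with signal terminated"), then escalate.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(2 * time.Second): // the log used a 2 s timeout for this container
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
		log.Print("killed after timeout")
	}
}
```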
Mar 17 20:31:14.123531 containerd[1529]: time="2025-03-17T20:31:14.122247840Z" level=info msg="shim disconnected" id=e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8 namespace=k8s.io Mar 17 20:31:14.123531 containerd[1529]: time="2025-03-17T20:31:14.122383873Z" level=warning msg="cleaning up after shim disconnected" id=e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8 namespace=k8s.io Mar 17 20:31:14.123531 containerd[1529]: time="2025-03-17T20:31:14.122413555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:14.125360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff-rootfs.mount: Deactivated successfully. Mar 17 20:31:14.132306 containerd[1529]: time="2025-03-17T20:31:14.132230048Z" level=info msg="shim disconnected" id=c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff namespace=k8s.io Mar 17 20:31:14.132306 containerd[1529]: time="2025-03-17T20:31:14.132301056Z" level=warning msg="cleaning up after shim disconnected" id=c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff namespace=k8s.io Mar 17 20:31:14.132682 containerd[1529]: time="2025-03-17T20:31:14.132317764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:14.157625 containerd[1529]: time="2025-03-17T20:31:14.157555415Z" level=info msg="StopContainer for \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\" returns successfully" Mar 17 20:31:14.159897 containerd[1529]: time="2025-03-17T20:31:14.159447610Z" level=info msg="StopContainer for \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\" returns successfully" Mar 17 20:31:14.162936 containerd[1529]: time="2025-03-17T20:31:14.162683761Z" level=info msg="StopPodSandbox for \"acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5\"" Mar 17 20:31:14.170705 containerd[1529]: time="2025-03-17T20:31:14.168208351Z" level=info msg="StopPodSandbox for \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\"" Mar 17 20:31:14.170705 containerd[1529]: time="2025-03-17T20:31:14.168278347Z" level=info msg="Container to stop \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:31:14.170705 containerd[1529]: time="2025-03-17T20:31:14.168346092Z" level=info msg="Container to stop \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:31:14.170705 containerd[1529]: time="2025-03-17T20:31:14.168363577Z" level=info msg="Container to stop \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:31:14.170705 containerd[1529]: time="2025-03-17T20:31:14.168379422Z" level=info msg="Container to stop \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:31:14.170705 containerd[1529]: time="2025-03-17T20:31:14.168393745Z" level=info msg="Container to stop \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:31:14.171072 containerd[1529]: time="2025-03-17T20:31:14.164620822Z" level=info msg="Container to stop \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:31:14.174854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd-shm.mount: Deactivated successfully. Mar 17 20:31:14.181078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5-shm.mount: Deactivated successfully. Mar 17 20:31:14.186888 systemd[1]: cri-containerd-b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd.scope: Deactivated successfully. Mar 17 20:31:14.194778 systemd[1]: cri-containerd-acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5.scope: Deactivated successfully. Mar 17 20:31:14.234570 containerd[1529]: time="2025-03-17T20:31:14.234342214Z" level=info msg="shim disconnected" id=b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd namespace=k8s.io Mar 17 20:31:14.234570 containerd[1529]: time="2025-03-17T20:31:14.234427921Z" level=warning msg="cleaning up after shim disconnected" id=b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd namespace=k8s.io Mar 17 20:31:14.234570 containerd[1529]: time="2025-03-17T20:31:14.234447347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:14.238386 containerd[1529]: time="2025-03-17T20:31:14.238128948Z" level=info msg="shim disconnected" id=acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5 namespace=k8s.io Mar 17 20:31:14.238386 containerd[1529]: time="2025-03-17T20:31:14.238179307Z" level=warning msg="cleaning up after shim disconnected" id=acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5 namespace=k8s.io Mar 17 20:31:14.238386 containerd[1529]: time="2025-03-17T20:31:14.238193659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:14.264312 containerd[1529]: time="2025-03-17T20:31:14.264154876Z" level=info msg="TearDown network for sandbox \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" successfully" Mar 17 20:31:14.264312 containerd[1529]: time="2025-03-17T20:31:14.264230434Z" level=info msg="TearDown network for sandbox \"acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5\" successfully" Mar 17 20:31:14.264312 containerd[1529]: time="2025-03-17T20:31:14.264273186Z" level=info msg="StopPodSandbox for \"acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5\" returns successfully" Mar 17 20:31:14.264969 containerd[1529]: time="2025-03-17T20:31:14.264229854Z" level=info msg="StopPodSandbox for \"b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd\" returns successfully" Mar 17 20:31:14.428544 kubelet[2772]: I0317 20:31:14.428472 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-cgroup\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.428544 kubelet[2772]: I0317 20:31:14.428546 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/451330fa-c9d5-43aa-a54d-add34474be19-clustermesh-secrets\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.428888 kubelet[2772]: I0317 20:31:14.428589 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/6b16d145-6f58-4f1d-ac6c-ea3969459599-cilium-config-path\") pod \"6b16d145-6f58-4f1d-ac6c-ea3969459599\" (UID: \"6b16d145-6f58-4f1d-ac6c-ea3969459599\") " Mar 17 20:31:14.428888 kubelet[2772]: I0317 20:31:14.428621 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-bpf-maps\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.428888 kubelet[2772]: I0317 20:31:14.428834 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lg8q\" (UniqueName: \"kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-kube-api-access-9lg8q\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.428888 kubelet[2772]: I0317 20:31:14.428866 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-hostproc\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429190 kubelet[2772]: I0317 20:31:14.428894 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-net\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429190 kubelet[2772]: I0317 20:31:14.428932 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-kernel\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429190 kubelet[2772]: I0317 20:31:14.428965 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-xtables-lock\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429190 kubelet[2772]: I0317 20:31:14.428994 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/451330fa-c9d5-43aa-a54d-add34474be19-cilium-config-path\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429190 kubelet[2772]: I0317 20:31:14.429069 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-run\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429190 kubelet[2772]: I0317 20:31:14.429103 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cni-path\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429519 kubelet[2772]: I0317 20:31:14.429137 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rgnj\" (UniqueName: 
\"kubernetes.io/projected/6b16d145-6f58-4f1d-ac6c-ea3969459599-kube-api-access-2rgnj\") pod \"6b16d145-6f58-4f1d-ac6c-ea3969459599\" (UID: \"6b16d145-6f58-4f1d-ac6c-ea3969459599\") " Mar 17 20:31:14.429519 kubelet[2772]: I0317 20:31:14.429165 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-lib-modules\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429519 kubelet[2772]: I0317 20:31:14.429189 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-etc-cni-netd\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.429677 kubelet[2772]: I0317 20:31:14.429530 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-hubble-tls\") pod \"451330fa-c9d5-43aa-a54d-add34474be19\" (UID: \"451330fa-c9d5-43aa-a54d-add34474be19\") " Mar 17 20:31:14.446443 kubelet[2772]: I0317 20:31:14.444590 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.446443 kubelet[2772]: I0317 20:31:14.445789 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:31:14.446443 kubelet[2772]: I0317 20:31:14.445820 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/451330fa-c9d5-43aa-a54d-add34474be19-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 20:31:14.451237 kubelet[2772]: I0317 20:31:14.451197 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b16d145-6f58-4f1d-ac6c-ea3969459599-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b16d145-6f58-4f1d-ac6c-ea3969459599" (UID: "6b16d145-6f58-4f1d-ac6c-ea3969459599"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 20:31:14.451348 kubelet[2772]: I0317 20:31:14.451296 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.451348 kubelet[2772]: I0317 20:31:14.451336 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cni-path" (OuterVolumeSpecName: "cni-path") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.452478 kubelet[2772]: I0317 20:31:14.452444 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/451330fa-c9d5-43aa-a54d-add34474be19-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 20:31:14.452622 kubelet[2772]: I0317 20:31:14.452597 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.455003 kubelet[2772]: I0317 20:31:14.454967 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.455095 kubelet[2772]: I0317 20:31:14.455018 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.455549 kubelet[2772]: I0317 20:31:14.455494 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b16d145-6f58-4f1d-ac6c-ea3969459599-kube-api-access-2rgnj" (OuterVolumeSpecName: "kube-api-access-2rgnj") pod "6b16d145-6f58-4f1d-ac6c-ea3969459599" (UID: "6b16d145-6f58-4f1d-ac6c-ea3969459599"). InnerVolumeSpecName "kube-api-access-2rgnj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:31:14.456983 kubelet[2772]: I0317 20:31:14.456821 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.456983 kubelet[2772]: I0317 20:31:14.456873 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-hostproc" (OuterVolumeSpecName: "hostproc") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.456983 kubelet[2772]: I0317 20:31:14.456912 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.456983 kubelet[2772]: I0317 20:31:14.456948 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:31:14.458044 kubelet[2772]: I0317 20:31:14.457989 2772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-kube-api-access-9lg8q" (OuterVolumeSpecName: "kube-api-access-9lg8q") pod "451330fa-c9d5-43aa-a54d-add34474be19" (UID: "451330fa-c9d5-43aa-a54d-add34474be19"). InnerVolumeSpecName "kube-api-access-9lg8q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530743 2772 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-bpf-maps\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530802 2772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-net\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530821 2772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-host-proc-sys-kernel\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530838 2772 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-xtables-lock\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530855 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/451330fa-c9d5-43aa-a54d-add34474be19-cilium-config-path\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530871 2772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lg8q\" (UniqueName: \"kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-kube-api-access-9lg8q\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530887 2772 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-hostproc\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531086 kubelet[2772]: I0317 20:31:14.530903 2772 reconciler_common.go:299] "Volume detached for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-run\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.530918 2772 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cni-path\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.530933 2772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2rgnj\" (UniqueName: \"kubernetes.io/projected/6b16d145-6f58-4f1d-ac6c-ea3969459599-kube-api-access-2rgnj\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.530950 2772 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-lib-modules\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.530966 2772 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-etc-cni-netd\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.530981 2772 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/451330fa-c9d5-43aa-a54d-add34474be19-hubble-tls\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.530996 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/451330fa-c9d5-43aa-a54d-add34474be19-cilium-cgroup\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.531020 2772 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/451330fa-c9d5-43aa-a54d-add34474be19-clustermesh-secrets\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.531760 kubelet[2772]: I0317 20:31:14.531038 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b16d145-6f58-4f1d-ac6c-ea3969459599-cilium-config-path\") on node \"srv-24y52.gb1.brightbox.com\" DevicePath \"\"" Mar 17 20:31:14.879162 systemd[1]: Removed slice kubepods-burstable-pod451330fa_c9d5_43aa_a54d_add34474be19.slice - libcontainer container kubepods-burstable-pod451330fa_c9d5_43aa_a54d_add34474be19.slice. Mar 17 20:31:14.879324 systemd[1]: kubepods-burstable-pod451330fa_c9d5_43aa_a54d_add34474be19.slice: Consumed 10.540s CPU time, 193.8M memory peak, 69.2M read from disk, 13.3M written to disk. 
Mar 17 20:31:14.882193 kubelet[2772]: I0317 20:31:14.882152 2772 scope.go:117] "RemoveContainer" containerID="c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff" Mar 17 20:31:14.897684 containerd[1529]: time="2025-03-17T20:31:14.897552773Z" level=info msg="RemoveContainer for \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\"" Mar 17 20:31:14.905620 containerd[1529]: time="2025-03-17T20:31:14.904934671Z" level=info msg="RemoveContainer for \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\" returns successfully" Mar 17 20:31:14.914996 systemd[1]: Removed slice kubepods-besteffort-pod6b16d145_6f58_4f1d_ac6c_ea3969459599.slice - libcontainer container kubepods-besteffort-pod6b16d145_6f58_4f1d_ac6c_ea3969459599.slice. Mar 17 20:31:14.915198 systemd[1]: kubepods-besteffort-pod6b16d145_6f58_4f1d_ac6c_ea3969459599.slice: Consumed 588ms CPU time, 34.4M memory peak, 7.3M read from disk, 4K written to disk. Mar 17 20:31:14.919669 kubelet[2772]: I0317 20:31:14.919144 2772 scope.go:117] "RemoveContainer" containerID="e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1" Mar 17 20:31:14.923619 containerd[1529]: time="2025-03-17T20:31:14.923220847Z" level=info msg="RemoveContainer for \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\"" Mar 17 20:31:14.926977 containerd[1529]: time="2025-03-17T20:31:14.926944073Z" level=info msg="RemoveContainer for \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\" returns successfully" Mar 17 20:31:14.927293 kubelet[2772]: I0317 20:31:14.927266 2772 scope.go:117] "RemoveContainer" containerID="ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91" Mar 17 20:31:14.929792 containerd[1529]: time="2025-03-17T20:31:14.929516502Z" level=info msg="RemoveContainer for \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\"" Mar 17 20:31:14.936178 containerd[1529]: time="2025-03-17T20:31:14.936141972Z" level=info msg="RemoveContainer for \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\" returns successfully" Mar 17 20:31:14.936595 kubelet[2772]: I0317 20:31:14.936544 2772 scope.go:117] "RemoveContainer" containerID="eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2" Mar 17 20:31:14.939337 containerd[1529]: time="2025-03-17T20:31:14.939303711Z" level=info msg="RemoveContainer for \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\"" Mar 17 20:31:14.944862 containerd[1529]: time="2025-03-17T20:31:14.944828629Z" level=info msg="RemoveContainer for \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\" returns successfully" Mar 17 20:31:14.945364 kubelet[2772]: I0317 20:31:14.945192 2772 scope.go:117] "RemoveContainer" containerID="f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee" Mar 17 20:31:14.949443 containerd[1529]: time="2025-03-17T20:31:14.947983265Z" level=info msg="RemoveContainer for \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\"" Mar 17 20:31:14.953059 containerd[1529]: time="2025-03-17T20:31:14.952949614Z" level=info msg="RemoveContainer for \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\" returns successfully" Mar 17 20:31:14.953295 kubelet[2772]: I0317 20:31:14.953174 2772 scope.go:117] "RemoveContainer" containerID="c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff" Mar 17 20:31:14.953961 containerd[1529]: time="2025-03-17T20:31:14.953572793Z" level=error msg="ContainerStatus for 
\"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\": not found" Mar 17 20:31:14.962623 kubelet[2772]: E0317 20:31:14.962546 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\": not found" containerID="c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff" Mar 17 20:31:14.969497 kubelet[2772]: I0317 20:31:14.963431 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff"} err="failed to get container status \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\": rpc error: code = NotFound desc = an error occurred when try to find container \"c170966b065b267e9846594e2a8c8f062a58bf6a7539c40cb2407bf795ac3cff\": not found" Mar 17 20:31:14.969609 kubelet[2772]: I0317 20:31:14.969487 2772 scope.go:117] "RemoveContainer" containerID="e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1" Mar 17 20:31:14.969824 containerd[1529]: time="2025-03-17T20:31:14.969781199Z" level=error msg="ContainerStatus for \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\": not found" Mar 17 20:31:14.969977 kubelet[2772]: E0317 20:31:14.969949 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\": not found" containerID="e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1" Mar 17 20:31:14.974540 kubelet[2772]: I0317 20:31:14.969984 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1"} err="failed to get container status \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e77c7e4a5fcd9c8bd4694e9ce7e374694d5bfe20f407eb22cbd183ddc833a2c1\": not found" Mar 17 20:31:14.974540 kubelet[2772]: I0317 20:31:14.973983 2772 scope.go:117] "RemoveContainer" containerID="ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91" Mar 17 20:31:14.975044 containerd[1529]: time="2025-03-17T20:31:14.974916738Z" level=error msg="ContainerStatus for \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\": not found" Mar 17 20:31:14.977268 kubelet[2772]: E0317 20:31:14.977218 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\": not found" containerID="ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91" Mar 17 20:31:14.977936 kubelet[2772]: I0317 20:31:14.977383 2772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91"} err="failed to get container status \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca067d93c109dffde9a227c1896e90e54ca8dd2f3bac27a1859f215fe221bc91\": not found" Mar 17 20:31:14.977936 kubelet[2772]: I0317 20:31:14.977418 2772 scope.go:117] "RemoveContainer" containerID="eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2" Mar 17 20:31:14.978361 containerd[1529]: time="2025-03-17T20:31:14.977757038Z" level=error msg="ContainerStatus for \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\": not found" Mar 17 20:31:14.978508 kubelet[2772]: E0317 20:31:14.977986 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\": not found" containerID="eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2" Mar 17 20:31:14.978508 kubelet[2772]: I0317 20:31:14.978017 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2"} err="failed to get container status \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"eedab9fe115f4783adecc7361c71993a796cabe45ae91cee4c1d217db0052cc2\": not found" Mar 17 20:31:14.978508 kubelet[2772]: I0317 20:31:14.978040 2772 scope.go:117] "RemoveContainer" containerID="f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee" Mar 17 20:31:14.978806 containerd[1529]: time="2025-03-17T20:31:14.978350650Z" level=error msg="ContainerStatus for \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\": not found" Mar 17 20:31:14.979447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b44bc39aad5e37db760a0876be692210d2e5f8c494c1db2d54ee631c153407fd-rootfs.mount: Deactivated successfully. Mar 17 20:31:14.979641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acee099f4cebf8c28e1b883d55e112b4b0be73a44ee909524b37be8e5298a1a5-rootfs.mount: Deactivated successfully. Mar 17 20:31:14.979818 systemd[1]: var-lib-kubelet-pods-451330fa\x2dc9d5\x2d43aa\x2da54d\x2dadd34474be19-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9lg8q.mount: Deactivated successfully. 
Mar 17 20:31:14.980098 kubelet[2772]: E0317 20:31:14.979838 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\": not found" containerID="f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee" Mar 17 20:31:14.980098 kubelet[2772]: I0317 20:31:14.979877 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee"} err="failed to get container status \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3f8f1c65d63f2d01d886696d2a221854f9ece613797a7c63fb66b03586dd3ee\": not found" Mar 17 20:31:14.980098 kubelet[2772]: I0317 20:31:14.979906 2772 scope.go:117] "RemoveContainer" containerID="e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8" Mar 17 20:31:14.979979 systemd[1]: var-lib-kubelet-pods-6b16d145\x2d6f58\x2d4f1d\x2dac6c\x2dea3969459599-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rgnj.mount: Deactivated successfully. Mar 17 20:31:14.980108 systemd[1]: var-lib-kubelet-pods-451330fa\x2dc9d5\x2d43aa\x2da54d\x2dadd34474be19-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 20:31:14.980229 systemd[1]: var-lib-kubelet-pods-451330fa\x2dc9d5\x2d43aa\x2da54d\x2dadd34474be19-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 20:31:14.983732 containerd[1529]: time="2025-03-17T20:31:14.983201331Z" level=info msg="RemoveContainer for \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\"" Mar 17 20:31:14.988019 containerd[1529]: time="2025-03-17T20:31:14.987986327Z" level=info msg="RemoveContainer for \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\" returns successfully" Mar 17 20:31:14.988379 kubelet[2772]: I0317 20:31:14.988283 2772 scope.go:117] "RemoveContainer" containerID="e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8" Mar 17 20:31:14.988910 containerd[1529]: time="2025-03-17T20:31:14.988596635Z" level=error msg="ContainerStatus for \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\": not found" Mar 17 20:31:14.989162 kubelet[2772]: E0317 20:31:14.989068 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\": not found" containerID="e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8" Mar 17 20:31:14.989162 kubelet[2772]: I0317 20:31:14.989115 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8"} err="failed to get container status \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e53b50978e9d4b7eb802cd3ad643bbf67ca04a2b584bc73c739da56041000eb8\": not found" Mar 17 20:31:15.548194 kubelet[2772]: E0317 20:31:15.548091 2772 kubelet.go:3008] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 20:31:16.032121 sshd[4384]: Connection closed by 139.178.89.65 port 33488 Mar 17 20:31:16.033020 sshd-session[4382]: pam_unix(sshd:session): session closed for user core Mar 17 20:31:16.038512 systemd[1]: sshd@25-10.230.57.126:22-139.178.89.65:33488.service: Deactivated successfully. Mar 17 20:31:16.041349 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 20:31:16.041708 systemd[1]: session-27.scope: Consumed 1.251s CPU time, 29.4M memory peak. Mar 17 20:31:16.042613 systemd-logind[1507]: Session 27 logged out. Waiting for processes to exit. Mar 17 20:31:16.044498 systemd-logind[1507]: Removed session 27. Mar 17 20:31:16.199950 systemd[1]: Started sshd@26-10.230.57.126:22-139.178.89.65:43814.service - OpenSSH per-connection server daemon (139.178.89.65:43814). Mar 17 20:31:16.378757 kubelet[2772]: I0317 20:31:16.378701 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="451330fa-c9d5-43aa-a54d-add34474be19" path="/var/lib/kubelet/pods/451330fa-c9d5-43aa-a54d-add34474be19/volumes" Mar 17 20:31:16.380355 kubelet[2772]: I0317 20:31:16.380305 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b16d145-6f58-4f1d-ac6c-ea3969459599" path="/var/lib/kubelet/pods/6b16d145-6f58-4f1d-ac6c-ea3969459599/volumes" Mar 17 20:31:17.101013 sshd[4544]: Accepted publickey for core from 139.178.89.65 port 43814 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:31:17.103119 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:31:17.111873 systemd-logind[1507]: New session 28 of user core. Mar 17 20:31:17.122855 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 17 20:31:18.592458 kubelet[2772]: I0317 20:31:18.592393 2772 memory_manager.go:355] "RemoveStaleState removing state" podUID="6b16d145-6f58-4f1d-ac6c-ea3969459599" containerName="cilium-operator" Mar 17 20:31:18.592458 kubelet[2772]: I0317 20:31:18.592448 2772 memory_manager.go:355] "RemoveStaleState removing state" podUID="451330fa-c9d5-43aa-a54d-add34474be19" containerName="cilium-agent" Mar 17 20:31:18.638863 systemd[1]: Created slice kubepods-burstable-pod5cbc7625_9d0d_4a82_98dd_b7e718d4adf8.slice - libcontainer container kubepods-burstable-pod5cbc7625_9d0d_4a82_98dd_b7e718d4adf8.slice. Mar 17 20:31:18.713704 sshd[4546]: Connection closed by 139.178.89.65 port 43814 Mar 17 20:31:18.714710 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Mar 17 20:31:18.719123 systemd[1]: sshd@26-10.230.57.126:22-139.178.89.65:43814.service: Deactivated successfully. Mar 17 20:31:18.722761 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 20:31:18.725309 systemd-logind[1507]: Session 28 logged out. Waiting for processes to exit. Mar 17 20:31:18.727282 systemd-logind[1507]: Removed session 28. 
Mar 17 20:31:18.760989 kubelet[2772]: I0317 20:31:18.760318 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-lib-modules\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.760989 kubelet[2772]: I0317 20:31:18.760386 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-host-proc-sys-net\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.760989 kubelet[2772]: I0317 20:31:18.760420 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-host-proc-sys-kernel\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.760989 kubelet[2772]: I0317 20:31:18.760489 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpff\" (UniqueName: \"kubernetes.io/projected/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-kube-api-access-zzpff\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.760989 kubelet[2772]: I0317 20:31:18.760523 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-cni-path\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.760989 kubelet[2772]: I0317 20:31:18.760549 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-xtables-lock\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761440 kubelet[2772]: I0317 20:31:18.760574 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-cilium-ipsec-secrets\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761440 kubelet[2772]: I0317 20:31:18.760605 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-cilium-cgroup\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761440 kubelet[2772]: I0317 20:31:18.760636 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-hostproc\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761440 kubelet[2772]: I0317 20:31:18.760711 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-cilium-run\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761440 kubelet[2772]: I0317 20:31:18.760749 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-clustermesh-secrets\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761440 kubelet[2772]: I0317 20:31:18.760779 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-cilium-config-path\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761819 kubelet[2772]: I0317 20:31:18.760808 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-bpf-maps\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761819 kubelet[2772]: I0317 20:31:18.760858 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-etc-cni-netd\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.761819 kubelet[2772]: I0317 20:31:18.760886 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cbc7625-9d0d-4a82-98dd-b7e718d4adf8-hubble-tls\") pod \"cilium-kt6w4\" (UID: \"5cbc7625-9d0d-4a82-98dd-b7e718d4adf8\") " pod="kube-system/cilium-kt6w4" Mar 17 20:31:18.915948 systemd[1]: Started sshd@27-10.230.57.126:22-139.178.89.65:43830.service - OpenSSH per-connection server daemon (139.178.89.65:43830). Mar 17 20:31:18.951258 containerd[1529]: time="2025-03-17T20:31:18.950994363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt6w4,Uid:5cbc7625-9d0d-4a82-98dd-b7e718d4adf8,Namespace:kube-system,Attempt:0,}" Mar 17 20:31:18.982550 containerd[1529]: time="2025-03-17T20:31:18.982309730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:31:18.982550 containerd[1529]: time="2025-03-17T20:31:18.982532556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:31:18.983465 containerd[1529]: time="2025-03-17T20:31:18.982581167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:31:18.983465 containerd[1529]: time="2025-03-17T20:31:18.982773487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:31:19.006846 systemd[1]: Started cri-containerd-0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2.scope - libcontainer container 0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2. 
Mar 17 20:31:19.042390 containerd[1529]: time="2025-03-17T20:31:19.042002110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt6w4,Uid:5cbc7625-9d0d-4a82-98dd-b7e718d4adf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\"" Mar 17 20:31:19.047666 containerd[1529]: time="2025-03-17T20:31:19.047573469Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 20:31:19.063441 containerd[1529]: time="2025-03-17T20:31:19.063365776Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17\"" Mar 17 20:31:19.064085 containerd[1529]: time="2025-03-17T20:31:19.064032131Z" level=info msg="StartContainer for \"147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17\"" Mar 17 20:31:19.106414 systemd[1]: Started cri-containerd-147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17.scope - libcontainer container 147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17. Mar 17 20:31:19.144944 containerd[1529]: time="2025-03-17T20:31:19.144883681Z" level=info msg="StartContainer for \"147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17\" returns successfully" Mar 17 20:31:19.165375 systemd[1]: cri-containerd-147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17.scope: Deactivated successfully. Mar 17 20:31:19.208618 containerd[1529]: time="2025-03-17T20:31:19.208254602Z" level=info msg="shim disconnected" id=147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17 namespace=k8s.io Mar 17 20:31:19.208618 containerd[1529]: time="2025-03-17T20:31:19.208329169Z" level=warning msg="cleaning up after shim disconnected" id=147b08dd5d9be048ad56c94f55704a083ed1320da554893157479394a7ca0f17 namespace=k8s.io Mar 17 20:31:19.208618 containerd[1529]: time="2025-03-17T20:31:19.208344927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:19.816511 sshd[4560]: Accepted publickey for core from 139.178.89.65 port 43830 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:31:19.818548 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:31:19.826345 systemd-logind[1507]: New session 29 of user core. Mar 17 20:31:19.830848 systemd[1]: Started session-29.scope - Session 29 of User core. 
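Editor's note: sshd identifies the accepted client key above only by its type and SHA256 fingerprint ("RSA SHA256:k1ZH..."). The same string can be recomputed from an authorized_keys entry; the sketch below uses golang.org/x/crypto/ssh, and the file path is a hypothetical location rather than anything recorded in the log.

```go
// Sketch (assumes an authorized_keys file on disk, which the log does not show):
// derive the "SHA256:..." fingerprint format that sshd logs for accepted keys.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}

	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		log.Fatal(err)
	}

	// Prints the key type and the SHA256:... fingerprint form that sshd logs.
	fmt.Println(pub.Type(), ssh.FingerprintSHA256(pub))
}
```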
Mar 17 20:31:19.925846 containerd[1529]: time="2025-03-17T20:31:19.925483348Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 20:31:19.953040 containerd[1529]: time="2025-03-17T20:31:19.952611800Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497\"" Mar 17 20:31:19.953679 containerd[1529]: time="2025-03-17T20:31:19.953355146Z" level=info msg="StartContainer for \"bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497\"" Mar 17 20:31:19.999961 systemd[1]: Started cri-containerd-bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497.scope - libcontainer container bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497. Mar 17 20:31:20.040189 containerd[1529]: time="2025-03-17T20:31:20.040042613Z" level=info msg="StartContainer for \"bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497\" returns successfully" Mar 17 20:31:20.052113 systemd[1]: cri-containerd-bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497.scope: Deactivated successfully. Mar 17 20:31:20.085443 containerd[1529]: time="2025-03-17T20:31:20.084732958Z" level=info msg="shim disconnected" id=bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497 namespace=k8s.io Mar 17 20:31:20.085443 containerd[1529]: time="2025-03-17T20:31:20.084899784Z" level=warning msg="cleaning up after shim disconnected" id=bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497 namespace=k8s.io Mar 17 20:31:20.085443 containerd[1529]: time="2025-03-17T20:31:20.084936483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:20.433692 sshd[4664]: Connection closed by 139.178.89.65 port 43830 Mar 17 20:31:20.433031 sshd-session[4560]: pam_unix(sshd:session): session closed for user core Mar 17 20:31:20.439714 systemd[1]: sshd@27-10.230.57.126:22-139.178.89.65:43830.service: Deactivated successfully. Mar 17 20:31:20.444154 systemd[1]: session-29.scope: Deactivated successfully. Mar 17 20:31:20.446028 systemd-logind[1507]: Session 29 logged out. Waiting for processes to exit. Mar 17 20:31:20.448264 systemd-logind[1507]: Removed session 29. Mar 17 20:31:20.549724 kubelet[2772]: E0317 20:31:20.549606 2772 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 20:31:20.598074 systemd[1]: Started sshd@28-10.230.57.126:22-139.178.89.65:43842.service - OpenSSH per-connection server daemon (139.178.89.65:43842). Mar 17 20:31:20.881840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdcc7f77b5832a3be91906c57baf7f0e7a5932c0d4e5b13917e2eca56d654497-rootfs.mount: Deactivated successfully. 
Mar 17 20:31:20.929111 containerd[1529]: time="2025-03-17T20:31:20.928948880Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 20:31:20.955907 containerd[1529]: time="2025-03-17T20:31:20.955844410Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c\"" Mar 17 20:31:20.959219 containerd[1529]: time="2025-03-17T20:31:20.957945043Z" level=info msg="StartContainer for \"d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c\"" Mar 17 20:31:21.014864 systemd[1]: Started cri-containerd-d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c.scope - libcontainer container d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c. Mar 17 20:31:21.072507 containerd[1529]: time="2025-03-17T20:31:21.072411083Z" level=info msg="StartContainer for \"d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c\" returns successfully" Mar 17 20:31:21.078895 systemd[1]: cri-containerd-d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c.scope: Deactivated successfully. Mar 17 20:31:21.112577 containerd[1529]: time="2025-03-17T20:31:21.112507399Z" level=info msg="shim disconnected" id=d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c namespace=k8s.io Mar 17 20:31:21.113153 containerd[1529]: time="2025-03-17T20:31:21.112905522Z" level=warning msg="cleaning up after shim disconnected" id=d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c namespace=k8s.io Mar 17 20:31:21.113153 containerd[1529]: time="2025-03-17T20:31:21.112954758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:21.129702 containerd[1529]: time="2025-03-17T20:31:21.129621698Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:31:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 20:31:21.489117 sshd[4733]: Accepted publickey for core from 139.178.89.65 port 43842 ssh2: RSA SHA256:k1ZHd7Wei2LOtqjxADh/qNuu/xdqobLaJ/Va6KemVy0 Mar 17 20:31:21.491138 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 20:31:21.499039 systemd-logind[1507]: New session 30 of user core. Mar 17 20:31:21.504855 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 17 20:31:21.881787 systemd[1]: run-containerd-runc-k8s.io-d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c-runc.M8gStY.mount: Deactivated successfully. Mar 17 20:31:21.881954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4c00b187e2414977e3b4e6d69a4934921d300914e05750e025129f92216354c-rootfs.mount: Deactivated successfully. 
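Editor's note: the third init step above, mount-bpf-fs, conventionally ensures a BPF filesystem is mounted at /sys/fs/bpf before the agent starts; the log only records the container's start and exit, not its commands. A minimal sketch of such a mount, under that assumption, is shown below.

```go
// Sketch (assumed behaviour of the mount-bpf-fs step, not taken from the log):
// mount the BPF filesystem at /sys/fs/bpf if it is not already mounted.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	target := "/sys/fs/bpf"

	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}

	// Skip the mount if a BPF filesystem is already present at the target.
	var fs unix.Statfs_t
	if err := unix.Statfs(target, &fs); err == nil && fs.Type == unix.BPF_FS_MAGIC {
		return
	}

	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
}
```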
Mar 17 20:31:21.931678 containerd[1529]: time="2025-03-17T20:31:21.931470510Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 20:31:21.953980 containerd[1529]: time="2025-03-17T20:31:21.953833288Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b\"" Mar 17 20:31:21.956540 containerd[1529]: time="2025-03-17T20:31:21.955170413Z" level=info msg="StartContainer for \"51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b\"" Mar 17 20:31:22.016217 systemd[1]: Started cri-containerd-51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b.scope - libcontainer container 51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b. Mar 17 20:31:22.059479 systemd[1]: cri-containerd-51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b.scope: Deactivated successfully. Mar 17 20:31:22.064041 containerd[1529]: time="2025-03-17T20:31:22.063438943Z" level=info msg="StartContainer for \"51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b\" returns successfully" Mar 17 20:31:22.114807 containerd[1529]: time="2025-03-17T20:31:22.114693465Z" level=info msg="shim disconnected" id=51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b namespace=k8s.io Mar 17 20:31:22.115809 containerd[1529]: time="2025-03-17T20:31:22.115318006Z" level=warning msg="cleaning up after shim disconnected" id=51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b namespace=k8s.io Mar 17 20:31:22.115809 containerd[1529]: time="2025-03-17T20:31:22.115346383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 20:31:22.882625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51256c3222afff1307e54021e2fda72d3bd77536fe147eaf80f0ba0628959e3b-rootfs.mount: Deactivated successfully. Mar 17 20:31:22.938832 containerd[1529]: time="2025-03-17T20:31:22.938600984Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 20:31:22.963504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650117027.mount: Deactivated successfully. Mar 17 20:31:22.966686 containerd[1529]: time="2025-03-17T20:31:22.966414848Z" level=info msg="CreateContainer within sandbox \"0304aab60bdd87b84d2ad7e9c13d38d87fdc3778df26e11cd4683f70625eddd2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b\"" Mar 17 20:31:22.967891 containerd[1529]: time="2025-03-17T20:31:22.967851462Z" level=info msg="StartContainer for \"88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b\"" Mar 17 20:31:23.011874 systemd[1]: Started cri-containerd-88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b.scope - libcontainer container 88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b. 
Mar 17 20:31:23.055164 containerd[1529]: time="2025-03-17T20:31:23.055108871Z" level=info msg="StartContainer for \"88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b\" returns successfully" Mar 17 20:31:23.770682 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 20:31:24.455996 kubelet[2772]: I0317 20:31:24.455909 2772 setters.go:602] "Node became not ready" node="srv-24y52.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T20:31:24Z","lastTransitionTime":"2025-03-17T20:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 20:31:24.472332 systemd[1]: run-containerd-runc-k8s.io-88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b-runc.QBEBBf.mount: Deactivated successfully. Mar 17 20:31:27.597311 systemd-networkd[1454]: lxc_health: Link UP Mar 17 20:31:27.615492 systemd-networkd[1454]: lxc_health: Gained carrier Mar 17 20:31:28.913244 systemd[1]: run-containerd-runc-k8s.io-88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b-runc.Rtdwkm.mount: Deactivated successfully. Mar 17 20:31:29.003767 kubelet[2772]: I0317 20:31:29.003667 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kt6w4" podStartSLOduration=11.003622984 podStartE2EDuration="11.003622984s" podCreationTimestamp="2025-03-17 20:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:31:23.976532339 +0000 UTC m=+153.816798823" watchObservedRunningTime="2025-03-17 20:31:29.003622984 +0000 UTC m=+158.843889475" Mar 17 20:31:29.180818 systemd-networkd[1454]: lxc_health: Gained IPv6LL Mar 17 20:31:31.353386 systemd[1]: run-containerd-runc-k8s.io-88c43f193218029bf44b4a609da47694ada87034666f5d238160fd40bb00be9b-runc.z5ddRl.mount: Deactivated successfully. Mar 17 20:31:33.814705 sshd[4792]: Connection closed by 139.178.89.65 port 43842 Mar 17 20:31:33.816178 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Mar 17 20:31:33.821675 systemd[1]: sshd@28-10.230.57.126:22-139.178.89.65:43842.service: Deactivated successfully. Mar 17 20:31:33.826927 systemd[1]: session-30.scope: Deactivated successfully. Mar 17 20:31:33.830406 systemd-logind[1507]: Session 30 logged out. Waiting for processes to exit. Mar 17 20:31:33.833391 systemd-logind[1507]: Removed session 30.
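Editor's note: the pod_startup_latency_tracker entry reports podStartSLOduration=11.003622984s for cilium-kt6w4, which is simply observedRunningTime (20:31:29.003622984 UTC) minus podCreationTimestamp (20:31:18 UTC); first/lastFinishedPulling stay at the zero time because no image pull was recorded. The arithmetic can be checked directly from the two timestamps in the log:

```go
// Check of the podStartSLOduration arithmetic, using the two timestamps
// reported in the kubelet entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the log's timestamp form; a fractional seconds field
	// in the input is accepted by time.Parse even without one in the layout.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-03-17 20:31:18 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-03-17 20:31:29.003622984 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 11.003622984s, matching podStartSLOduration
}
```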