Apr 30 01:26:52.043946 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:31:30 -00 2025 Apr 30 01:26:52.044016 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 01:26:52.044032 kernel: BIOS-provided physical RAM map: Apr 30 01:26:52.044047 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 01:26:52.044056 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 01:26:52.044065 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 01:26:52.044075 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Apr 30 01:26:52.044084 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Apr 30 01:26:52.044093 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 30 01:26:52.044102 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 30 01:26:52.044111 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 01:26:52.044120 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 01:26:52.044141 kernel: NX (Execute Disable) protection: active Apr 30 01:26:52.044151 kernel: APIC: Static calls initialized Apr 30 01:26:52.044162 kernel: SMBIOS 2.8 present. Apr 30 01:26:52.044189 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014 Apr 30 01:26:52.044200 kernel: Hypervisor detected: KVM Apr 30 01:26:52.044215 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 01:26:52.044226 kernel: kvm-clock: using sched offset of 5903015774 cycles Apr 30 01:26:52.044237 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 01:26:52.044248 kernel: tsc: Detected 2799.998 MHz processor Apr 30 01:26:52.044258 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 01:26:52.044269 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 01:26:52.045997 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Apr 30 01:26:52.046018 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 01:26:52.046030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 01:26:52.046048 kernel: Using GB pages for direct mapping Apr 30 01:26:52.046059 kernel: ACPI: Early table checksum verification disabled Apr 30 01:26:52.046083 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS ) Apr 30 01:26:52.046093 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 01:26:52.046103 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 01:26:52.046114 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 01:26:52.046124 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Apr 30 01:26:52.046134 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 01:26:52.046144 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Apr 30 01:26:52.046159 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 01:26:52.046182 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 01:26:52.046192 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Apr 30 01:26:52.046203 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Apr 30 01:26:52.046213 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Apr 30 01:26:52.046242 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Apr 30 01:26:52.046253 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Apr 30 01:26:52.046268 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Apr 30 01:26:52.046279 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Apr 30 01:26:52.046290 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 01:26:52.046308 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 01:26:52.046321 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Apr 30 01:26:52.046331 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Apr 30 01:26:52.046342 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Apr 30 01:26:52.046357 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Apr 30 01:26:52.046369 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Apr 30 01:26:52.046379 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Apr 30 01:26:52.046390 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Apr 30 01:26:52.046401 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Apr 30 01:26:52.046411 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Apr 30 01:26:52.046422 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Apr 30 01:26:52.046432 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Apr 30 01:26:52.046443 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Apr 30 01:26:52.046458 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Apr 30 01:26:52.046474 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Apr 30 01:26:52.046485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 30 01:26:52.046496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Apr 30 01:26:52.046507 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Apr 30 01:26:52.046531 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Apr 30 01:26:52.046554 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Apr 30 01:26:52.046565 kernel: Zone ranges: Apr 30 01:26:52.046577 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 01:26:52.046588 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Apr 30 01:26:52.046604 kernel: Normal empty Apr 30 01:26:52.046615 kernel: Movable zone start for each node Apr 30 01:26:52.046639 kernel: Early memory node ranges Apr 30 01:26:52.046650 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 01:26:52.046661 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Apr 30 01:26:52.046671 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Apr 30 01:26:52.046682 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 01:26:52.046693 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 01:26:52.046709 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Apr 30 01:26:52.046721 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 01:26:52.046736 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 01:26:52.046759 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Apr 30 01:26:52.046771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 01:26:52.046782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 01:26:52.046793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 01:26:52.046826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 01:26:52.046837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 01:26:52.046848 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 01:26:52.046858 kernel: TSC deadline timer available Apr 30 01:26:52.046873 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Apr 30 01:26:52.046896 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 01:26:52.046907 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 30 01:26:52.046917 kernel: Booting paravirtualized kernel on KVM Apr 30 01:26:52.046928 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 01:26:52.046952 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Apr 30 01:26:52.046963 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Apr 30 01:26:52.046974 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Apr 30 01:26:52.046984 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Apr 30 01:26:52.046999 kernel: kvm-guest: PV spinlocks enabled Apr 30 01:26:52.048990 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 01:26:52.049008 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 01:26:52.049021 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 01:26:52.049032 kernel: random: crng init done Apr 30 01:26:52.049043 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 01:26:52.049055 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 01:26:52.049073 kernel: Fallback order for Node 0: 0 Apr 30 01:26:52.049084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Apr 30 01:26:52.049103 kernel: Policy zone: DMA32 Apr 30 01:26:52.049115 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 01:26:52.049127 kernel: software IO TLB: area num 16. Apr 30 01:26:52.049138 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42992K init, 2200K bss, 194824K reserved, 0K cma-reserved) Apr 30 01:26:52.049150 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Apr 30 01:26:52.049161 kernel: Kernel/User page tables isolation: enabled Apr 30 01:26:52.049172 kernel: ftrace: allocating 37946 entries in 149 pages Apr 30 01:26:52.049188 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 01:26:52.049200 kernel: Dynamic Preempt: voluntary Apr 30 01:26:52.049211 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 01:26:52.049223 kernel: rcu: RCU event tracing is enabled. 
Apr 30 01:26:52.049235 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Apr 30 01:26:52.049250 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 01:26:52.049269 kernel: Rude variant of Tasks RCU enabled. Apr 30 01:26:52.049285 kernel: Tracing variant of Tasks RCU enabled. Apr 30 01:26:52.049297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 01:26:52.049309 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Apr 30 01:26:52.049320 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Apr 30 01:26:52.049332 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 01:26:52.049347 kernel: Console: colour VGA+ 80x25 Apr 30 01:26:52.049359 kernel: printk: console [tty0] enabled Apr 30 01:26:52.049371 kernel: printk: console [ttyS0] enabled Apr 30 01:26:52.049383 kernel: ACPI: Core revision 20230628 Apr 30 01:26:52.049395 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 01:26:52.049410 kernel: x2apic enabled Apr 30 01:26:52.049422 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 01:26:52.049440 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Apr 30 01:26:52.049453 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Apr 30 01:26:52.049465 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 30 01:26:52.049477 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 30 01:26:52.049488 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 30 01:26:52.049500 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 01:26:52.049511 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 01:26:52.049523 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 01:26:52.049549 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 01:26:52.049562 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 30 01:26:52.049574 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 01:26:52.049586 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 01:26:52.049597 kernel: MDS: Mitigation: Clear CPU buffers Apr 30 01:26:52.049608 kernel: MMIO Stale Data: Unknown: No mitigations Apr 30 01:26:52.049620 kernel: SRBDS: Unknown: Dependent on hypervisor status Apr 30 01:26:52.049631 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 01:26:52.049643 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 01:26:52.049654 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 01:26:52.049666 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 01:26:52.049683 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 30 01:26:52.049694 kernel: Freeing SMP alternatives memory: 32K Apr 30 01:26:52.049711 kernel: pid_max: default: 32768 minimum: 301 Apr 30 01:26:52.049730 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 01:26:52.049742 kernel: landlock: Up and running. Apr 30 01:26:52.049754 kernel: SELinux: Initializing. 
Apr 30 01:26:52.049766 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 01:26:52.049777 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 01:26:52.049789 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Apr 30 01:26:52.049801 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 01:26:52.049813 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 01:26:52.049830 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 01:26:52.049843 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Apr 30 01:26:52.049854 kernel: signal: max sigframe size: 1776 Apr 30 01:26:52.049866 kernel: rcu: Hierarchical SRCU implementation. Apr 30 01:26:52.049878 kernel: rcu: Max phase no-delay instances is 400. Apr 30 01:26:52.049890 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 01:26:52.049902 kernel: smp: Bringing up secondary CPUs ... Apr 30 01:26:52.049913 kernel: smpboot: x86: Booting SMP configuration: Apr 30 01:26:52.049925 kernel: .... node #0, CPUs: #1 Apr 30 01:26:52.049941 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Apr 30 01:26:52.049953 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 01:26:52.049991 kernel: smpboot: Max logical packages: 16 Apr 30 01:26:52.050004 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Apr 30 01:26:52.050016 kernel: devtmpfs: initialized Apr 30 01:26:52.050027 kernel: x86/mm: Memory block size: 128MB Apr 30 01:26:52.050039 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 01:26:52.050051 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 30 01:26:52.050063 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 01:26:52.050081 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 01:26:52.050093 kernel: audit: initializing netlink subsys (disabled) Apr 30 01:26:52.050105 kernel: audit: type=2000 audit(1745976409.929:1): state=initialized audit_enabled=0 res=1 Apr 30 01:26:52.050117 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 01:26:52.050128 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 01:26:52.050140 kernel: cpuidle: using governor menu Apr 30 01:26:52.050152 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 01:26:52.050164 kernel: dca service started, version 1.12.1 Apr 30 01:26:52.050175 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 30 01:26:52.050192 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 30 01:26:52.050204 kernel: PCI: Using configuration type 1 for base access Apr 30 01:26:52.050215 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 01:26:52.050227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 01:26:52.050239 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 01:26:52.050251 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 01:26:52.050262 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 01:26:52.050274 kernel: ACPI: Added _OSI(Module Device) Apr 30 01:26:52.050285 kernel: ACPI: Added _OSI(Processor Device) Apr 30 01:26:52.050302 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 01:26:52.050314 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 01:26:52.050325 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 01:26:52.050337 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 01:26:52.050348 kernel: ACPI: Interpreter enabled Apr 30 01:26:52.050360 kernel: ACPI: PM: (supports S0 S5) Apr 30 01:26:52.050372 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 01:26:52.050383 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 01:26:52.050395 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 01:26:52.050411 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 30 01:26:52.050423 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 01:26:52.050706 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 01:26:52.050912 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 30 01:26:52.051132 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 30 01:26:52.051151 kernel: PCI host bridge to bus 0000:00 Apr 30 01:26:52.051342 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 01:26:52.051506 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 01:26:52.051679 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 01:26:52.051837 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Apr 30 01:26:52.055084 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 30 01:26:52.055284 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Apr 30 01:26:52.055444 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 01:26:52.055664 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 30 01:26:52.055862 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Apr 30 01:26:52.056077 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Apr 30 01:26:52.056250 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Apr 30 01:26:52.056442 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Apr 30 01:26:52.056627 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 01:26:52.056813 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 30 01:26:52.061060 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Apr 30 01:26:52.061336 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 30 01:26:52.061529 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Apr 30 01:26:52.061728 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 30 01:26:52.061914 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Apr 30 01:26:52.062136 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 30 
01:26:52.062344 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Apr 30 01:26:52.062572 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 30 01:26:52.062743 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Apr 30 01:26:52.062958 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Apr 30 01:26:52.065176 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Apr 30 01:26:52.065417 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 30 01:26:52.065618 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Apr 30 01:26:52.065807 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 30 01:26:52.066027 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Apr 30 01:26:52.066221 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Apr 30 01:26:52.067070 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df] Apr 30 01:26:52.067267 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Apr 30 01:26:52.067455 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Apr 30 01:26:52.067655 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Apr 30 01:26:52.067894 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Apr 30 01:26:52.069819 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f] Apr 30 01:26:52.070032 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Apr 30 01:26:52.070233 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Apr 30 01:26:52.070455 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 30 01:26:52.070642 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 30 01:26:52.070841 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 30 01:26:52.071029 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff] Apr 30 01:26:52.071221 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Apr 30 01:26:52.071439 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 30 01:26:52.071626 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 30 01:26:52.071823 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Apr 30 01:26:52.075120 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Apr 30 01:26:52.075330 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Apr 30 01:26:52.075512 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Apr 30 01:26:52.075719 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Apr 30 01:26:52.075893 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 01:26:52.077233 kernel: pci_bus 0000:02: extended config space not accessible Apr 30 01:26:52.077521 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Apr 30 01:26:52.077739 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Apr 30 01:26:52.077927 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Apr 30 01:26:52.080170 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Apr 30 01:26:52.080366 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 01:26:52.080576 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 01:26:52.080764 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 30 01:26:52.080962 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Apr 30 01:26:52.081159 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Apr 30 01:26:52.081328 
kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 01:26:52.081504 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 01:26:52.081734 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 30 01:26:52.081935 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Apr 30 01:26:52.084161 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Apr 30 01:26:52.084354 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 01:26:52.084529 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 01:26:52.084721 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Apr 30 01:26:52.084893 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 01:26:52.085112 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 01:26:52.085284 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Apr 30 01:26:52.085461 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 01:26:52.085652 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 01:26:52.085842 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Apr 30 01:26:52.088087 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 01:26:52.088287 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 01:26:52.088469 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Apr 30 01:26:52.088658 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 01:26:52.088824 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 01:26:52.089057 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Apr 30 01:26:52.089227 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 01:26:52.089402 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 01:26:52.089421 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 01:26:52.089434 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 01:26:52.089446 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 01:26:52.089459 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 01:26:52.089471 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 30 01:26:52.089483 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 30 01:26:52.089495 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 30 01:26:52.089506 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 30 01:26:52.089525 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 30 01:26:52.089548 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 30 01:26:52.089561 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 30 01:26:52.089573 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 30 01:26:52.089585 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 30 01:26:52.089596 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 30 01:26:52.089608 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 30 01:26:52.089620 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 30 01:26:52.089632 kernel: iommu: Default domain type: Translated Apr 30 01:26:52.089651 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 01:26:52.089663 kernel: PCI: Using ACPI for IRQ routing Apr 30 01:26:52.089675 kernel: PCI: 
pci_cache_line_size set to 64 bytes Apr 30 01:26:52.089687 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 01:26:52.089699 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Apr 30 01:26:52.089869 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 30 01:26:52.096594 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 30 01:26:52.096802 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 01:26:52.096849 kernel: vgaarb: loaded Apr 30 01:26:52.096862 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 01:26:52.096874 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 01:26:52.096886 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 01:26:52.096897 kernel: pnp: PnP ACPI init Apr 30 01:26:52.097185 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 30 01:26:52.097206 kernel: pnp: PnP ACPI: found 5 devices Apr 30 01:26:52.097219 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 01:26:52.097248 kernel: NET: Registered PF_INET protocol family Apr 30 01:26:52.097261 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 01:26:52.097273 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 01:26:52.097285 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 01:26:52.097298 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 01:26:52.097320 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 01:26:52.097332 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 01:26:52.097356 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 01:26:52.097368 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 01:26:52.097385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 01:26:52.097398 kernel: NET: Registered PF_XDP protocol family Apr 30 01:26:52.097608 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Apr 30 01:26:52.097786 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Apr 30 01:26:52.098072 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Apr 30 01:26:52.098269 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Apr 30 01:26:52.098459 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 30 01:26:52.098670 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 30 01:26:52.098857 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 30 01:26:52.099084 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff] Apr 30 01:26:52.099265 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff] Apr 30 01:26:52.099458 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff] Apr 30 01:26:52.099669 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff] Apr 30 01:26:52.099845 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff] Apr 30 01:26:52.100142 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff] Apr 30 01:26:52.100387 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff] Apr 30 01:26:52.100609 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Apr 30 01:26:52.100784 
kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Apr 30 01:26:52.100958 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 01:26:52.101179 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 01:26:52.101390 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Apr 30 01:26:52.101570 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Apr 30 01:26:52.101738 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Apr 30 01:26:52.101919 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 01:26:52.107320 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Apr 30 01:26:52.107573 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff] Apr 30 01:26:52.107746 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 01:26:52.107934 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 01:26:52.108188 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Apr 30 01:26:52.108375 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff] Apr 30 01:26:52.108571 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 01:26:52.108750 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 01:26:52.108960 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Apr 30 01:26:52.109175 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff] Apr 30 01:26:52.109348 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 01:26:52.109524 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 01:26:52.109712 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Apr 30 01:26:52.109880 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff] Apr 30 01:26:52.110136 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 01:26:52.110308 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 01:26:52.110530 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Apr 30 01:26:52.110715 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff] Apr 30 01:26:52.110887 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 01:26:52.117123 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 01:26:52.117346 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Apr 30 01:26:52.117527 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff] Apr 30 01:26:52.117720 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 01:26:52.117889 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 01:26:52.118089 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Apr 30 01:26:52.118258 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff] Apr 30 01:26:52.118439 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 01:26:52.118630 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 01:26:52.118825 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 01:26:52.119022 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 01:26:52.119187 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 01:26:52.119342 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Apr 30 01:26:52.119500 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 30 01:26:52.119675 kernel: pci_bus 0000:00: resource 9 [mem 
0x20c0000000-0x28bfffffff window] Apr 30 01:26:52.119873 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff] Apr 30 01:26:52.120092 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Apr 30 01:26:52.120279 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 01:26:52.120465 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff] Apr 30 01:26:52.120664 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Apr 30 01:26:52.120835 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 01:26:52.122114 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff] Apr 30 01:26:52.122300 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Apr 30 01:26:52.122454 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 01:26:52.122651 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff] Apr 30 01:26:52.122825 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Apr 30 01:26:52.122993 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 01:26:52.123233 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff] Apr 30 01:26:52.123399 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Apr 30 01:26:52.123574 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 01:26:52.123772 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff] Apr 30 01:26:52.123934 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Apr 30 01:26:52.127131 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 01:26:52.127414 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff] Apr 30 01:26:52.127626 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Apr 30 01:26:52.127788 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 01:26:52.128082 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff] Apr 30 01:26:52.128260 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Apr 30 01:26:52.128430 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 01:26:52.128620 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Apr 30 01:26:52.128780 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Apr 30 01:26:52.128942 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 01:26:52.128984 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 30 01:26:52.128998 kernel: PCI: CLS 0 bytes, default 64 Apr 30 01:26:52.129012 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 01:26:52.129025 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Apr 30 01:26:52.129038 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 01:26:52.129057 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Apr 30 01:26:52.129070 kernel: Initialise system trusted keyrings Apr 30 01:26:52.129083 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 30 01:26:52.129096 kernel: Key type asymmetric registered Apr 30 01:26:52.129113 kernel: Asymmetric key parser 'x509' registered Apr 30 01:26:52.129127 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 01:26:52.129140 kernel: io scheduler mq-deadline registered Apr 30 01:26:52.129152 kernel: io scheduler kyber registered Apr 30 01:26:52.129165 kernel: io scheduler bfq registered Apr 30 
01:26:52.129343 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 30 01:26:52.129520 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 30 01:26:52.129709 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.129886 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 30 01:26:52.130092 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 30 01:26:52.130266 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.130473 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 30 01:26:52.130672 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 30 01:26:52.130854 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.132277 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 30 01:26:52.132480 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 30 01:26:52.132679 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.132860 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 30 01:26:52.133080 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 30 01:26:52.133261 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.133436 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 30 01:26:52.133647 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 30 01:26:52.133815 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.134044 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 30 01:26:52.134250 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 30 01:26:52.134447 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.134630 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 30 01:26:52.134805 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 30 01:26:52.135009 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:26:52.135030 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 01:26:52.135044 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 30 01:26:52.135057 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 30 01:26:52.135082 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 01:26:52.135095 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 01:26:52.135114 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 01:26:52.135127 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 01:26:52.135140 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 01:26:52.135343 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 30 01:26:52.135363 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 01:26:52.135548 kernel: rtc_cmos 00:03: registered as rtc0 Apr 30 
01:26:52.135713 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T01:26:51 UTC (1745976411) Apr 30 01:26:52.135882 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Apr 30 01:26:52.135918 kernel: intel_pstate: CPU model not supported Apr 30 01:26:52.135931 kernel: NET: Registered PF_INET6 protocol family Apr 30 01:26:52.135943 kernel: Segment Routing with IPv6 Apr 30 01:26:52.135954 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 01:26:52.135989 kernel: NET: Registered PF_PACKET protocol family Apr 30 01:26:52.136034 kernel: Key type dns_resolver registered Apr 30 01:26:52.136055 kernel: IPI shorthand broadcast: enabled Apr 30 01:26:52.136068 kernel: sched_clock: Marking stable (1379003549, 231171351)->(1871891416, -261716516) Apr 30 01:26:52.136080 kernel: registered taskstats version 1 Apr 30 01:26:52.136117 kernel: Loading compiled-in X.509 certificates Apr 30 01:26:52.136131 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: eb8928891d93dabd1aa89590482110d196038597' Apr 30 01:26:52.136143 kernel: Key type .fscrypt registered Apr 30 01:26:52.136155 kernel: Key type fscrypt-provisioning registered Apr 30 01:26:52.136169 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 01:26:52.136181 kernel: ima: Allocated hash algorithm: sha1 Apr 30 01:26:52.136194 kernel: ima: No architecture policies found Apr 30 01:26:52.136207 kernel: clk: Disabling unused clocks Apr 30 01:26:52.136219 kernel: Freeing unused kernel image (initmem) memory: 42992K Apr 30 01:26:52.136237 kernel: Write protecting the kernel read-only data: 36864k Apr 30 01:26:52.136251 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Apr 30 01:26:52.136263 kernel: Run /init as init process Apr 30 01:26:52.136280 kernel: with arguments: Apr 30 01:26:52.136294 kernel: /init Apr 30 01:26:52.136318 kernel: with environment: Apr 30 01:26:52.136330 kernel: HOME=/ Apr 30 01:26:52.136342 kernel: TERM=linux Apr 30 01:26:52.136354 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 01:26:52.136387 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 01:26:52.136402 systemd[1]: Detected virtualization kvm. Apr 30 01:26:52.136415 systemd[1]: Detected architecture x86-64. Apr 30 01:26:52.136440 systemd[1]: Running in initrd. Apr 30 01:26:52.136453 systemd[1]: No hostname configured, using default hostname. Apr 30 01:26:52.136466 systemd[1]: Hostname set to . Apr 30 01:26:52.136480 systemd[1]: Initializing machine ID from VM UUID. Apr 30 01:26:52.136498 systemd[1]: Queued start job for default target initrd.target. Apr 30 01:26:52.136511 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 01:26:52.136525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 01:26:52.136549 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 01:26:52.136563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 01:26:52.136577 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Apr 30 01:26:52.136590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 01:26:52.136611 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 01:26:52.136625 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 01:26:52.136639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 01:26:52.136653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 01:26:52.136666 systemd[1]: Reached target paths.target - Path Units. Apr 30 01:26:52.136679 systemd[1]: Reached target slices.target - Slice Units. Apr 30 01:26:52.136692 systemd[1]: Reached target swap.target - Swaps. Apr 30 01:26:52.136706 systemd[1]: Reached target timers.target - Timer Units. Apr 30 01:26:52.136724 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 01:26:52.136738 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 01:26:52.136751 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 01:26:52.136765 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 01:26:52.136778 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 01:26:52.136791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 01:26:52.136805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 01:26:52.136831 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 01:26:52.136843 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 01:26:52.136861 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 01:26:52.136887 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 01:26:52.136899 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 01:26:52.136911 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 01:26:52.136923 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 01:26:52.136936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:26:52.136971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 01:26:52.136984 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 01:26:52.137176 systemd-journald[202]: Collecting audit messages is disabled. Apr 30 01:26:52.137209 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 01:26:52.137242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 01:26:52.137256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 01:26:52.137269 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 01:26:52.137282 kernel: Bridge firewalling registered Apr 30 01:26:52.137296 systemd-journald[202]: Journal started Apr 30 01:26:52.137339 systemd-journald[202]: Runtime Journal (/run/log/journal/0afd8702ecd5454d9899d3b0af4b4370) is 4.7M, max 38.0M, 33.2M free. 
Apr 30 01:26:52.064986 systemd-modules-load[203]: Inserted module 'overlay' Apr 30 01:26:52.131297 systemd-modules-load[203]: Inserted module 'br_netfilter' Apr 30 01:26:52.155167 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 01:26:52.156581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 01:26:52.157599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:26:52.166166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 01:26:52.177266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 01:26:52.184633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 01:26:52.194427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 01:26:52.200381 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 01:26:52.209860 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 01:26:52.213396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 01:26:52.222157 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 01:26:52.223247 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 01:26:52.228199 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 01:26:52.244374 dracut-cmdline[235]: dracut-dracut-053 Apr 30 01:26:52.249348 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 01:26:52.281833 systemd-resolved[237]: Positive Trust Anchors: Apr 30 01:26:52.281855 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 01:26:52.281899 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 01:26:52.300040 systemd-resolved[237]: Defaulting to hostname 'linux'. Apr 30 01:26:52.303061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 01:26:52.309707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 01:26:52.385043 kernel: SCSI subsystem initialized Apr 30 01:26:52.397024 kernel: Loading iSCSI transport class v2.0-870. Apr 30 01:26:52.410011 kernel: iscsi: registered transport (tcp) Apr 30 01:26:52.436036 kernel: iscsi: registered transport (qla4xxx) Apr 30 01:26:52.436157 kernel: QLogic iSCSI HBA Driver Apr 30 01:26:52.494193 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 30 01:26:52.501279 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 01:26:52.548242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 01:26:52.548352 kernel: device-mapper: uevent: version 1.0.3 Apr 30 01:26:52.550212 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 01:26:52.601017 kernel: raid6: sse2x4 gen() 13366 MB/s Apr 30 01:26:52.617008 kernel: raid6: sse2x2 gen() 9578 MB/s Apr 30 01:26:52.635686 kernel: raid6: sse2x1 gen() 10139 MB/s Apr 30 01:26:52.635737 kernel: raid6: using algorithm sse2x4 gen() 13366 MB/s Apr 30 01:26:52.654562 kernel: raid6: .... xor() 7757 MB/s, rmw enabled Apr 30 01:26:52.654608 kernel: raid6: using ssse3x2 recovery algorithm Apr 30 01:26:52.680023 kernel: xor: automatically using best checksumming function avx Apr 30 01:26:52.869086 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 01:26:52.884076 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 01:26:52.893243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 01:26:52.914149 systemd-udevd[420]: Using default interface naming scheme 'v255'. Apr 30 01:26:52.922082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 01:26:52.932339 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 01:26:52.954097 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Apr 30 01:26:52.996845 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 01:26:53.005150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 01:26:53.127712 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 01:26:53.135889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 01:26:53.170003 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 01:26:53.172303 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 01:26:53.175352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 01:26:53.177125 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 01:26:53.186444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 01:26:53.215190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 01:26:53.255644 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Apr 30 01:26:53.330091 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Apr 30 01:26:53.330344 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 01:26:53.330374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 01:26:53.330393 kernel: GPT:17805311 != 125829119 Apr 30 01:26:53.330409 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 01:26:53.330425 kernel: GPT:17805311 != 125829119 Apr 30 01:26:53.330441 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 01:26:53.330457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 01:26:53.330473 kernel: AVX version of gcm_enc/dec engaged. 
Apr 30 01:26:53.330489 kernel: AES CTR mode by8 optimization enabled Apr 30 01:26:53.331988 kernel: ACPI: bus type USB registered Apr 30 01:26:53.332264 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 01:26:53.343132 kernel: usbcore: registered new interface driver usbfs Apr 30 01:26:53.343158 kernel: usbcore: registered new interface driver hub Apr 30 01:26:53.343175 kernel: usbcore: registered new device driver usb Apr 30 01:26:53.332516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 01:26:53.346055 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 01:26:53.351866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 01:26:53.355000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:26:53.357845 kernel: libata version 3.00 loaded. Apr 30 01:26:53.355815 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:26:53.364280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:26:53.398014 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476) Apr 30 01:26:53.417028 kernel: BTRFS: device fsid 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (463) Apr 30 01:26:53.419652 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 01:26:53.423657 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 01:26:53.463782 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 01:26:53.463823 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 01:26:53.464128 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 01:26:53.464350 kernel: scsi host0: ahci Apr 30 01:26:53.464580 kernel: scsi host1: ahci Apr 30 01:26:53.464798 kernel: scsi host2: ahci Apr 30 01:26:53.465025 kernel: scsi host3: ahci Apr 30 01:26:53.465266 kernel: scsi host4: ahci Apr 30 01:26:53.465459 kernel: scsi host5: ahci Apr 30 01:26:53.465681 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Apr 30 01:26:53.465702 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Apr 30 01:26:53.465720 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Apr 30 01:26:53.465737 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Apr 30 01:26:53.465753 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Apr 30 01:26:53.465770 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Apr 30 01:26:53.448740 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Apr 30 01:26:53.545933 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 30 01:26:53.546303 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Apr 30 01:26:53.546559 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 30 01:26:53.546768 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 30 01:26:53.547050 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Apr 30 01:26:53.547351 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Apr 30 01:26:53.547588 kernel: hub 1-0:1.0: USB hub found Apr 30 01:26:53.547892 kernel: hub 1-0:1.0: 4 ports detected Apr 30 01:26:53.548184 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 30 01:26:53.548438 kernel: hub 2-0:1.0: USB hub found Apr 30 01:26:53.548701 kernel: hub 2-0:1.0: 4 ports detected Apr 30 01:26:53.546961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:26:53.555483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 01:26:53.566079 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 01:26:53.566961 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 01:26:53.574168 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 01:26:53.576881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 01:26:53.584985 disk-uuid[562]: Primary Header is updated. Apr 30 01:26:53.584985 disk-uuid[562]: Secondary Entries is updated. Apr 30 01:26:53.584985 disk-uuid[562]: Secondary Header is updated. Apr 30 01:26:53.591021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 01:26:53.599473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 01:26:53.612779 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 01:26:53.712992 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 01:26:53.773239 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 01:26:53.773362 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 01:26:53.774161 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 01:26:53.776173 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 01:26:53.778887 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 01:26:53.780997 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 01:26:53.859990 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 01:26:53.867080 kernel: usbcore: registered new interface driver usbhid Apr 30 01:26:53.867114 kernel: usbhid: USB HID core driver Apr 30 01:26:53.874546 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 30 01:26:53.874603 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Apr 30 01:26:54.609014 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 01:26:54.611125 disk-uuid[563]: The operation has completed successfully. Apr 30 01:26:54.665557 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 01:26:54.665722 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Apr 30 01:26:54.685175 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 01:26:54.692424 sh[584]: Success Apr 30 01:26:54.709994 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Apr 30 01:26:54.775399 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 01:26:54.785144 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 01:26:54.788416 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 01:26:54.822499 kernel: BTRFS info (device dm-0): first mount of filesystem 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f Apr 30 01:26:54.822560 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 01:26:54.824626 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 01:26:54.828056 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 01:26:54.828094 kernel: BTRFS info (device dm-0): using free space tree Apr 30 01:26:54.838637 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 01:26:54.840175 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 01:26:54.854323 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 01:26:54.859187 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 01:26:54.871672 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 01:26:54.871735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 01:26:54.873230 kernel: BTRFS info (device vda6): using free space tree Apr 30 01:26:54.881135 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 01:26:54.894051 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 01:26:54.896569 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 01:26:54.902556 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 01:26:54.910171 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 01:26:55.091505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 01:26:55.100256 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 01:26:55.120510 ignition[673]: Ignition 2.20.0 Apr 30 01:26:55.123416 ignition[673]: Stage: fetch-offline Apr 30 01:26:55.123524 ignition[673]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:26:55.123545 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 01:26:55.123688 ignition[673]: parsed url from cmdline: "" Apr 30 01:26:55.123695 ignition[673]: no config URL provided Apr 30 01:26:55.123704 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 01:26:55.123721 ignition[673]: no config at "/usr/lib/ignition/user.ign" Apr 30 01:26:55.123729 ignition[673]: failed to fetch config: resource requires networking Apr 30 01:26:55.126239 ignition[673]: Ignition finished successfully Apr 30 01:26:55.129566 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 30 01:26:55.136543 systemd-networkd[770]: lo: Link UP Apr 30 01:26:55.136558 systemd-networkd[770]: lo: Gained carrier Apr 30 01:26:55.139516 systemd-networkd[770]: Enumeration completed Apr 30 01:26:55.140294 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 01:26:55.140364 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:26:55.140370 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:26:55.142090 systemd-networkd[770]: eth0: Link UP Apr 30 01:26:55.142101 systemd-networkd[770]: eth0: Gained carrier Apr 30 01:26:55.142116 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:26:55.143096 systemd[1]: Reached target network.target - Network. Apr 30 01:26:55.159190 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 01:26:55.195079 systemd-networkd[770]: eth0: DHCPv4 address 10.243.74.78/30, gateway 10.243.74.77 acquired from 10.243.74.77 Apr 30 01:26:55.198327 ignition[774]: Ignition 2.20.0 Apr 30 01:26:55.198345 ignition[774]: Stage: fetch Apr 30 01:26:55.198583 ignition[774]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:26:55.198603 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 01:26:55.198723 ignition[774]: parsed url from cmdline: "" Apr 30 01:26:55.198742 ignition[774]: no config URL provided Apr 30 01:26:55.198752 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 01:26:55.198767 ignition[774]: no config at "/usr/lib/ignition/user.ign" Apr 30 01:26:55.198890 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Apr 30 01:26:55.199988 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Apr 30 01:26:55.200021 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Apr 30 01:26:55.216230 ignition[774]: GET result: OK Apr 30 01:26:55.216864 ignition[774]: parsing config with SHA512: 1850697252de9394ab29b84ab4d1838e4e7100ac9152c678775e3cd1f2037fc693d64b9a1ea2b20af3d7604d9ced6f380da5d0d5ba81211bbbb4b6d618bfae24 Apr 30 01:26:55.223404 unknown[774]: fetched base config from "system" Apr 30 01:26:55.223427 unknown[774]: fetched base config from "system" Apr 30 01:26:55.223892 ignition[774]: fetch: fetch complete Apr 30 01:26:55.223436 unknown[774]: fetched user config from "openstack" Apr 30 01:26:55.223900 ignition[774]: fetch: fetch passed Apr 30 01:26:55.226127 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 01:26:55.223977 ignition[774]: Ignition finished successfully Apr 30 01:26:55.233146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 01:26:55.252379 ignition[781]: Ignition 2.20.0 Apr 30 01:26:55.252405 ignition[781]: Stage: kargs Apr 30 01:26:55.252667 ignition[781]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:26:55.252686 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 01:26:55.256494 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 01:26:55.253880 ignition[781]: kargs: kargs passed Apr 30 01:26:55.253949 ignition[781]: Ignition finished successfully Apr 30 01:26:55.273769 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 30 01:26:55.299383 ignition[787]: Ignition 2.20.0 Apr 30 01:26:55.299398 ignition[787]: Stage: disks Apr 30 01:26:55.299682 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:26:55.302322 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 01:26:55.299704 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 01:26:55.303937 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 01:26:55.300851 ignition[787]: disks: disks passed Apr 30 01:26:55.305838 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 01:26:55.300934 ignition[787]: Ignition finished successfully Apr 30 01:26:55.307516 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 01:26:55.308746 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 01:26:55.310261 systemd[1]: Reached target basic.target - Basic System. Apr 30 01:26:55.328310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 01:26:55.348426 systemd-fsck[795]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 01:26:55.351446 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 01:26:55.360103 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 01:26:55.480982 kernel: EXT4-fs (vda9): mounted filesystem 21480c83-ef05-4682-ad3b-f751980943a0 r/w with ordered data mode. Quota mode: none. Apr 30 01:26:55.481683 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 01:26:55.483075 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 01:26:55.494121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 01:26:55.496945 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 01:26:55.499386 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 01:26:55.508987 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803) Apr 30 01:26:55.509030 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 01:26:55.510425 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Apr 30 01:26:55.516805 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 01:26:55.516832 kernel: BTRFS info (device vda6): using free space tree Apr 30 01:26:55.511246 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 01:26:55.511289 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 01:26:55.520269 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 01:26:55.524876 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 01:26:55.527055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 01:26:55.536210 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 01:26:55.609346 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 01:26:55.616535 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory Apr 30 01:26:55.624674 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 01:26:55.633010 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 01:26:55.737780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 01:26:55.744124 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 01:26:55.759257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 01:26:55.771018 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 01:26:55.792015 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 01:26:55.803353 ignition[924]: INFO : Ignition 2.20.0 Apr 30 01:26:55.803353 ignition[924]: INFO : Stage: mount Apr 30 01:26:55.805215 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 01:26:55.805215 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 01:26:55.805215 ignition[924]: INFO : mount: mount passed Apr 30 01:26:55.805215 ignition[924]: INFO : Ignition finished successfully Apr 30 01:26:55.805857 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 01:26:55.820503 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 01:26:57.067321 systemd-networkd[770]: eth0: Gained IPv6LL Apr 30 01:26:58.575186 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d293:24:19ff:fef3:4a4e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d293:24:19ff:fef3:4a4e/64 assigned by NDisc. Apr 30 01:26:58.575202 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Apr 30 01:27:02.691466 coreos-metadata[805]: Apr 30 01:27:02.691 WARN failed to locate config-drive, using the metadata service API instead Apr 30 01:27:02.716224 coreos-metadata[805]: Apr 30 01:27:02.716 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 30 01:27:02.727626 coreos-metadata[805]: Apr 30 01:27:02.727 INFO Fetch successful Apr 30 01:27:02.728523 coreos-metadata[805]: Apr 30 01:27:02.728 INFO wrote hostname srv-tw98u.gb1.brightbox.com to /sysroot/etc/hostname Apr 30 01:27:02.730673 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Apr 30 01:27:02.732143 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Apr 30 01:27:02.742142 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 01:27:02.759203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 01:27:02.770999 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Apr 30 01:27:02.777020 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 01:27:02.777086 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 01:27:02.777116 kernel: BTRFS info (device vda6): using free space tree Apr 30 01:27:02.783000 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 01:27:02.786483 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 01:27:02.815776 ignition[958]: INFO : Ignition 2.20.0 Apr 30 01:27:02.815776 ignition[958]: INFO : Stage: files Apr 30 01:27:02.817752 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 01:27:02.817752 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 01:27:02.817752 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Apr 30 01:27:02.820549 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 01:27:02.820549 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 01:27:02.822474 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 01:27:02.823526 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 01:27:02.823526 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 01:27:02.823290 unknown[958]: wrote ssh authorized keys file for user: core Apr 30 01:27:02.826406 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 01:27:02.826406 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Apr 30 01:27:03.061642 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 01:27:04.205901 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 01:27:04.215488 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 01:27:04.215488 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 30 01:27:04.772348 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 01:27:05.080649 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 01:27:05.082904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 01:27:05.082904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 01:27:05.082904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 01:27:05.082904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 01:27:05.082904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 01:27:05.082904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 01:27:05.082904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 01:27:05.090734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 01:27:05.090734 
ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 01:27:05.090734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 01:27:05.090734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 01:27:05.090734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 01:27:05.090734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 01:27:05.090734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Apr 30 01:27:05.574653 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 01:27:06.608235 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 01:27:06.608235 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 30 01:27:06.614422 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 01:27:06.615916 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 01:27:06.615916 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 30 01:27:06.615916 ignition[958]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Apr 30 01:27:06.615916 ignition[958]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 01:27:06.624421 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 01:27:06.624421 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 01:27:06.624421 ignition[958]: INFO : files: files passed Apr 30 01:27:06.624421 ignition[958]: INFO : Ignition finished successfully Apr 30 01:27:06.620568 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 01:27:06.641349 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 01:27:06.647228 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 01:27:06.649662 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 01:27:06.649805 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 30 01:27:06.676111 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 01:27:06.679244 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 01:27:06.680310 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 01:27:06.681318 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 01:27:06.682893 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 01:27:06.689274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 01:27:06.732736 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 01:27:06.732915 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 01:27:06.734683 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 01:27:06.736028 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 01:27:06.737569 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 01:27:06.742147 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 01:27:06.761391 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 01:27:06.768178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 01:27:06.783932 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 01:27:06.784905 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 01:27:06.786595 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 01:27:06.788049 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 01:27:06.788230 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 01:27:06.790124 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 01:27:06.791043 systemd[1]: Stopped target basic.target - Basic System. Apr 30 01:27:06.792426 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 01:27:06.793824 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 01:27:06.795247 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 01:27:06.796746 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 01:27:06.798304 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 01:27:06.799946 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 01:27:06.801435 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 01:27:06.803043 systemd[1]: Stopped target swap.target - Swaps. Apr 30 01:27:06.804368 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 01:27:06.804595 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 01:27:06.806279 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 01:27:06.807129 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 01:27:06.808550 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 01:27:06.808730 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 30 01:27:06.810218 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 01:27:06.810400 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 01:27:06.812356 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 01:27:06.812521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 01:27:06.814265 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 01:27:06.814418 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 01:27:06.824342 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 01:27:06.830805 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 01:27:06.831121 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 01:27:06.846346 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 01:27:06.847109 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 01:27:06.847393 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 01:27:06.848813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 01:27:06.850093 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 01:27:06.859607 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 01:27:06.859779 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 01:27:06.885210 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 01:27:06.889395 ignition[1010]: INFO : Ignition 2.20.0 Apr 30 01:27:06.889395 ignition[1010]: INFO : Stage: umount Apr 30 01:27:06.889395 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 01:27:06.889395 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 01:27:06.894253 ignition[1010]: INFO : umount: umount passed Apr 30 01:27:06.894253 ignition[1010]: INFO : Ignition finished successfully Apr 30 01:27:06.890743 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 01:27:06.891646 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 01:27:06.893593 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 01:27:06.893740 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 01:27:06.895972 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 01:27:06.896142 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 01:27:06.897250 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 01:27:06.897319 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 01:27:06.898702 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 01:27:06.898781 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 01:27:06.900082 systemd[1]: Stopped target network.target - Network. Apr 30 01:27:06.901347 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 01:27:06.901444 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 01:27:06.902815 systemd[1]: Stopped target paths.target - Path Units. Apr 30 01:27:06.904106 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 01:27:06.906134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 30 01:27:06.907238 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 01:27:06.908699 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 01:27:06.910080 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 01:27:06.910149 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 01:27:06.911420 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 01:27:06.911481 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 01:27:06.912846 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 01:27:06.912921 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 01:27:06.914431 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 01:27:06.914506 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 01:27:06.915750 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 01:27:06.915816 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 01:27:06.917541 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 01:27:06.920366 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 01:27:06.921139 systemd-networkd[770]: eth0: DHCPv6 lease lost Apr 30 01:27:06.924586 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 01:27:06.924828 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 01:27:06.929361 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 01:27:06.929484 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 01:27:06.939142 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 01:27:06.940614 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 01:27:06.940700 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 01:27:06.943161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 01:27:06.946761 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 01:27:06.946972 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 01:27:06.961522 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 01:27:06.962821 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 01:27:06.964432 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 01:27:06.964590 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 01:27:06.968388 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 01:27:06.968493 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 01:27:06.970073 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 01:27:06.970128 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 01:27:06.970840 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 01:27:06.970912 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 01:27:06.973053 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 01:27:06.973132 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 01:27:06.974531 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 01:27:06.974603 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 01:27:06.985330 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 01:27:06.986544 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 01:27:06.986642 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 01:27:06.987400 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 01:27:06.987477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 01:27:06.988214 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 01:27:06.988278 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 01:27:06.989821 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 01:27:06.989884 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 01:27:06.992224 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 01:27:06.992321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:27:07.003389 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 01:27:07.003764 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 01:27:07.006516 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 01:27:07.020257 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 01:27:07.030371 systemd[1]: Switching root. Apr 30 01:27:07.070037 systemd-journald[202]: Journal stopped Apr 30 01:27:08.696778 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Apr 30 01:27:08.696910 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 01:27:08.696953 kernel: SELinux: policy capability open_perms=1 Apr 30 01:27:08.699052 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 01:27:08.699077 kernel: SELinux: policy capability always_check_network=0 Apr 30 01:27:08.699102 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 01:27:08.699128 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 01:27:08.699177 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 01:27:08.699198 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 01:27:08.699222 kernel: audit: type=1403 audit(1745976427.424:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 01:27:08.699254 systemd[1]: Successfully loaded SELinux policy in 57.808ms. Apr 30 01:27:08.699290 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.336ms. Apr 30 01:27:08.699314 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 01:27:08.699341 systemd[1]: Detected virtualization kvm. Apr 30 01:27:08.699361 systemd[1]: Detected architecture x86-64. Apr 30 01:27:08.699394 systemd[1]: Detected first boot. Apr 30 01:27:08.699416 systemd[1]: Hostname set to . Apr 30 01:27:08.699440 systemd[1]: Initializing machine ID from VM UUID. Apr 30 01:27:08.699460 zram_generator::config[1056]: No configuration found. Apr 30 01:27:08.699480 systemd[1]: Populated /etc with preset unit settings. Apr 30 01:27:08.699505 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Apr 30 01:27:08.699524 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 01:27:08.699543 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 01:27:08.699576 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 01:27:08.699598 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 01:27:08.699618 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 01:27:08.699645 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 01:27:08.699665 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 01:27:08.699684 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 01:27:08.699703 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 01:27:08.699728 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 01:27:08.699766 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 01:27:08.699805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 01:27:08.699825 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 01:27:08.699843 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 01:27:08.699867 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 01:27:08.699887 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 01:27:08.699917 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 01:27:08.699945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 01:27:08.699963 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 01:27:08.700025 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 01:27:08.700061 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 01:27:08.700079 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 01:27:08.700098 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 01:27:08.700150 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 01:27:08.700173 systemd[1]: Reached target slices.target - Slice Units. Apr 30 01:27:08.700192 systemd[1]: Reached target swap.target - Swaps. Apr 30 01:27:08.700211 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 01:27:08.700230 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 01:27:08.700249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 01:27:08.700267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 01:27:08.700287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 01:27:08.700306 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 01:27:08.700337 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 01:27:08.700369 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Apr 30 01:27:08.700389 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 01:27:08.700417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:08.700439 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 01:27:08.700457 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 01:27:08.700477 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 01:27:08.700504 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 01:27:08.700524 systemd[1]: Reached target machines.target - Containers. Apr 30 01:27:08.700566 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 01:27:08.700612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:27:08.700632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 01:27:08.700650 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 01:27:08.700668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:27:08.700686 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 01:27:08.700703 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:27:08.700731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 01:27:08.700760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:27:08.700780 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 01:27:08.700798 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 01:27:08.700815 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 01:27:08.700845 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 01:27:08.700861 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 01:27:08.700878 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 01:27:08.700895 kernel: loop: module loaded Apr 30 01:27:08.700912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 01:27:08.700939 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 01:27:08.701181 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 01:27:08.701215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 01:27:08.701236 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 01:27:08.701256 systemd[1]: Stopped verity-setup.service. Apr 30 01:27:08.701275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:08.701294 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 01:27:08.701313 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 01:27:08.701332 systemd[1]: Mounted media.mount - External Media Directory. 
Apr 30 01:27:08.701372 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 01:27:08.701395 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 01:27:08.701423 kernel: ACPI: bus type drm_connector registered Apr 30 01:27:08.701444 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 01:27:08.701474 kernel: fuse: init (API version 7.39) Apr 30 01:27:08.701495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 01:27:08.701514 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 01:27:08.701560 systemd-journald[1146]: Collecting audit messages is disabled. Apr 30 01:27:08.701596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 01:27:08.701624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:27:08.701670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:27:08.701702 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 01:27:08.701723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 01:27:08.701759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:27:08.701781 systemd-journald[1146]: Journal started Apr 30 01:27:08.701821 systemd-journald[1146]: Runtime Journal (/run/log/journal/0afd8702ecd5454d9899d3b0af4b4370) is 4.7M, max 38.0M, 33.2M free. Apr 30 01:27:08.267173 systemd[1]: Queued start job for default target multi-user.target. Apr 30 01:27:08.294898 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 01:27:08.295578 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 01:27:08.707005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 01:27:08.711019 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 01:27:08.710954 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 01:27:08.712164 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 01:27:08.712368 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 01:27:08.713530 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:27:08.713744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:27:08.714986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 01:27:08.716220 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 01:27:08.717298 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 01:27:08.733331 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 01:27:08.742820 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 01:27:08.752073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 01:27:08.753064 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 01:27:08.753235 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 01:27:08.755300 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 01:27:08.762784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Apr 30 01:27:08.767120 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 01:27:08.768215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:27:08.775165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 01:27:08.784156 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 01:27:08.785825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:27:08.793903 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 01:27:08.795277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:27:08.800460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 01:27:08.810191 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 01:27:08.824171 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 01:27:08.829509 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 01:27:08.832190 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 01:27:08.833317 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 01:27:08.849190 systemd-journald[1146]: Time spent on flushing to /var/log/journal/0afd8702ecd5454d9899d3b0af4b4370 is 137.331ms for 1147 entries. Apr 30 01:27:08.849190 systemd-journald[1146]: System Journal (/var/log/journal/0afd8702ecd5454d9899d3b0af4b4370) is 8.0M, max 584.8M, 576.8M free. Apr 30 01:27:09.058390 systemd-journald[1146]: Received client request to flush runtime journal. Apr 30 01:27:09.058448 kernel: loop0: detected capacity change from 0 to 218376 Apr 30 01:27:09.058487 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 01:27:09.058510 kernel: loop1: detected capacity change from 0 to 140992 Apr 30 01:27:08.854501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 01:27:08.856325 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 01:27:08.870211 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 01:27:08.966155 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 01:27:08.968288 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 01:27:08.986540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 01:27:08.999142 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 01:27:09.000449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 01:27:09.015623 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 01:27:09.058659 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 01:27:09.076232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 01:27:09.079024 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Apr 30 01:27:09.109988 kernel: loop2: detected capacity change from 0 to 138184 Apr 30 01:27:09.168048 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Apr 30 01:27:09.168074 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Apr 30 01:27:09.219780 kernel: loop3: detected capacity change from 0 to 8 Apr 30 01:27:09.236181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 01:27:09.257009 kernel: loop4: detected capacity change from 0 to 218376 Apr 30 01:27:09.285004 kernel: loop5: detected capacity change from 0 to 140992 Apr 30 01:27:09.315493 kernel: loop6: detected capacity change from 0 to 138184 Apr 30 01:27:09.345992 kernel: loop7: detected capacity change from 0 to 8 Apr 30 01:27:09.356422 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Apr 30 01:27:09.360173 (sd-merge)[1214]: Merged extensions into '/usr'. Apr 30 01:27:09.376158 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 01:27:09.376181 systemd[1]: Reloading... Apr 30 01:27:09.597019 zram_generator::config[1240]: No configuration found. Apr 30 01:27:09.790264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:27:09.850997 ldconfig[1184]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 01:27:09.882641 systemd[1]: Reloading finished in 505 ms. Apr 30 01:27:09.915505 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 01:27:09.918564 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 01:27:09.934334 systemd[1]: Starting ensure-sysext.service... Apr 30 01:27:09.941137 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 01:27:09.978039 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)... Apr 30 01:27:09.978072 systemd[1]: Reloading... Apr 30 01:27:10.046028 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 01:27:10.046600 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 01:27:10.050136 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 01:27:10.050538 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Apr 30 01:27:10.050642 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Apr 30 01:27:10.061086 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 01:27:10.061115 systemd-tmpfiles[1298]: Skipping /boot Apr 30 01:27:10.105405 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 01:27:10.105425 systemd-tmpfiles[1298]: Skipping /boot Apr 30 01:27:10.158027 zram_generator::config[1333]: No configuration found. Apr 30 01:27:10.314221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:27:10.379432 systemd[1]: Reloading finished in 400 ms. 
Apr 30 01:27:10.403898 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 01:27:10.411774 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 01:27:10.434480 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 01:27:10.458661 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 01:27:10.469322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 01:27:10.481327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 01:27:10.486748 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 01:27:10.502315 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 01:27:10.507360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:10.507641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:27:10.521411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:27:10.534738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:27:10.544369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:27:10.545294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:27:10.545441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:10.546975 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 01:27:10.563448 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 01:27:10.570343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:10.570626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:27:10.570875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:27:10.584322 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 01:27:10.585089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:10.597308 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:10.597742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:27:10.606317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 01:27:10.607262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:27:10.607464 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 01:27:10.609727 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 30 01:27:10.613053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:27:10.613304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:27:10.614915 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:27:10.615181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:27:10.615630 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Apr 30 01:27:10.620037 systemd[1]: Finished ensure-sysext.service. Apr 30 01:27:10.629728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:27:10.631064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 01:27:10.652362 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:27:10.652687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:27:10.664173 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 01:27:10.678275 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 01:27:10.678562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 01:27:10.696920 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 01:27:10.698000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 01:27:10.710258 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 01:27:10.711828 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 01:27:10.727922 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 01:27:10.735218 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 01:27:10.774798 augenrules[1441]: No rules Apr 30 01:27:10.774449 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 01:27:10.776043 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 01:27:10.929530 systemd-resolved[1391]: Positive Trust Anchors: Apr 30 01:27:10.929554 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 01:27:10.929597 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 01:27:10.942792 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 01:27:10.943851 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 01:27:10.945341 systemd-networkd[1425]: lo: Link UP Apr 30 01:27:10.945752 systemd-networkd[1425]: lo: Gained carrier Apr 30 01:27:10.949252 systemd-networkd[1425]: Enumeration completed Apr 30 01:27:10.949483 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 30 01:27:10.950465 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 01:27:10.957433 systemd-resolved[1391]: Using system hostname 'srv-tw98u.gb1.brightbox.com'. Apr 30 01:27:10.960186 systemd-timesyncd[1416]: No network connectivity, watching for changes. Apr 30 01:27:10.960691 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 01:27:10.964080 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 01:27:10.966167 systemd[1]: Reached target network.target - Network. Apr 30 01:27:10.966810 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 01:27:11.024153 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1424) Apr 30 01:27:11.141331 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:27:11.141605 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:27:11.145426 systemd-networkd[1425]: eth0: Link UP Apr 30 01:27:11.145551 systemd-networkd[1425]: eth0: Gained carrier Apr 30 01:27:11.145660 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:27:11.188486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 01:27:11.191800 systemd-networkd[1425]: eth0: DHCPv4 address 10.243.74.78/30, gateway 10.243.74.77 acquired from 10.243.74.77 Apr 30 01:27:11.196364 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. Apr 30 01:27:11.204012 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 01:27:11.206855 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 01:27:11.218993 kernel: ACPI: button: Power Button [PWRF] Apr 30 01:27:11.223584 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 01:27:11.238180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 01:27:11.251047 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 01:27:11.264998 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 01:27:11.272286 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 01:27:11.284137 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 01:27:11.359497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:27:11.559756 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 01:27:11.600435 systemd-timesyncd[1416]: Contacted time server 131.111.8.60:123 (1.flatcar.pool.ntp.org). Apr 30 01:27:11.600572 systemd-timesyncd[1416]: Initial clock synchronization to Wed 2025-04-30 01:27:11.678186 UTC. Apr 30 01:27:11.611035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:27:11.618236 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 01:27:11.646016 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 01:27:11.680662 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Apr 30 01:27:11.682622 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 01:27:11.683416 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 01:27:11.684283 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 01:27:11.685308 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 01:27:11.686469 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 01:27:11.687391 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 01:27:11.688183 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 01:27:11.688920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 01:27:11.688986 systemd[1]: Reached target paths.target - Path Units. Apr 30 01:27:11.689615 systemd[1]: Reached target timers.target - Timer Units. Apr 30 01:27:11.691818 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 01:27:11.694415 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 01:27:11.701249 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 01:27:11.703953 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 01:27:11.705471 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 01:27:11.706395 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 01:27:11.707139 systemd[1]: Reached target basic.target - Basic System. Apr 30 01:27:11.707840 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 01:27:11.707914 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 01:27:11.711136 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 01:27:11.716254 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 01:27:11.718011 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 01:27:11.725219 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 01:27:11.727568 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 01:27:11.743214 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 01:27:11.744961 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 01:27:11.755839 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 01:27:11.767105 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 01:27:11.773217 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 01:27:11.777169 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 01:27:11.778230 jq[1482]: false Apr 30 01:27:11.795250 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 01:27:11.796955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 30 01:27:11.798786 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 01:27:11.806204 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 01:27:11.808563 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 01:27:11.814036 extend-filesystems[1483]: Found loop4 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found loop5 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found loop6 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found loop7 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda1 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda2 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda3 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found usr Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda4 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda6 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda7 Apr 30 01:27:11.814036 extend-filesystems[1483]: Found vda9 Apr 30 01:27:11.814036 extend-filesystems[1483]: Checking size of /dev/vda9 Apr 30 01:27:11.812247 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 01:27:11.818561 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 01:27:11.818825 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 01:27:11.855323 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 01:27:11.879698 jq[1494]: true Apr 30 01:27:11.885706 update_engine[1493]: I20250430 01:27:11.885553 1493 main.cc:92] Flatcar Update Engine starting Apr 30 01:27:11.887766 dbus-daemon[1481]: [system] SELinux support is enabled Apr 30 01:27:11.888087 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 01:27:11.894929 extend-filesystems[1483]: Resized partition /dev/vda9 Apr 30 01:27:11.901504 dbus-daemon[1481]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1425 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 01:27:11.902624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 01:27:11.903050 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 01:27:11.909768 extend-filesystems[1518]: resize2fs 1.47.1 (20-May-2024) Apr 30 01:27:11.908383 systemd[1]: Started update-engine.service - Update Engine. Apr 30 01:27:11.914535 update_engine[1493]: I20250430 01:27:11.910359 1493 update_check_scheduler.cc:74] Next update check in 10m12s Apr 30 01:27:11.910861 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 01:27:11.909660 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 01:27:11.909837 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 01:27:11.913423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 30 01:27:11.913456 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 01:27:11.915285 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 01:27:11.927862 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Apr 30 01:27:11.929199 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 01:27:11.930564 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 01:27:11.930803 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 01:27:11.957279 tar[1497]: linux-amd64/LICENSE Apr 30 01:27:11.957259 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 01:27:11.970467 jq[1517]: true Apr 30 01:27:11.970649 tar[1497]: linux-amd64/helm Apr 30 01:27:11.995916 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1423) Apr 30 01:27:12.178761 systemd-logind[1491]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 01:27:12.231144 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 01:27:12.246219 systemd-logind[1491]: New seat seat0. Apr 30 01:27:12.250245 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 01:27:12.363668 systemd-networkd[1425]: eth0: Gained IPv6LL Apr 30 01:27:12.395083 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 01:27:12.406670 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 01:27:12.442204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:27:12.453498 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 01:27:12.489482 bash[1546]: Updated "/home/core/.ssh/authorized_keys" Apr 30 01:27:12.491051 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 01:27:12.508303 systemd[1]: Starting sshkeys.service... Apr 30 01:27:12.543622 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 01:27:12.543838 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 01:27:12.544393 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1523 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 01:27:12.552100 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Apr 30 01:27:12.565361 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 01:27:12.586912 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 01:27:12.588893 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 01:27:12.605660 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 01:27:12.605660 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 8 Apr 30 01:27:12.605660 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Apr 30 01:27:12.594738 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 30 01:27:12.628045 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Apr 30 01:27:12.612105 polkitd[1557]: Started polkitd version 121 Apr 30 01:27:12.602801 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 01:27:12.624132 polkitd[1557]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 01:27:12.603127 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 01:27:12.624233 polkitd[1557]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 01:27:12.629682 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 01:27:12.627033 polkitd[1557]: Finished loading, compiling and executing 2 rules Apr 30 01:27:12.629413 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 01:27:12.629811 polkitd[1557]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 01:27:12.665932 systemd-hostnamed[1523]: Hostname set to (static) Apr 30 01:27:12.677722 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 01:27:12.692714 containerd[1508]: time="2025-04-30T01:27:12.690441341Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 01:27:12.806685 containerd[1508]: time="2025-04-30T01:27:12.806147275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:27:12.810999 containerd[1508]: time="2025-04-30T01:27:12.810932042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:27:12.810999 containerd[1508]: time="2025-04-30T01:27:12.810994894Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 01:27:12.811089 containerd[1508]: time="2025-04-30T01:27:12.811023408Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 01:27:12.811346 containerd[1508]: time="2025-04-30T01:27:12.811309853Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 01:27:12.811346 containerd[1508]: time="2025-04-30T01:27:12.811343577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 01:27:12.811880 containerd[1508]: time="2025-04-30T01:27:12.811465609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:27:12.811880 containerd[1508]: time="2025-04-30T01:27:12.811498266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:27:12.811880 containerd[1508]: time="2025-04-30T01:27:12.811721558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:27:12.811880 containerd[1508]: time="2025-04-30T01:27:12.811745211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 30 01:27:12.811880 containerd[1508]: time="2025-04-30T01:27:12.811772496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:27:12.811880 containerd[1508]: time="2025-04-30T01:27:12.811790410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 01:27:12.815601 containerd[1508]: time="2025-04-30T01:27:12.813249026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:27:12.815601 containerd[1508]: time="2025-04-30T01:27:12.813661225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:27:12.815601 containerd[1508]: time="2025-04-30T01:27:12.813812377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:27:12.815601 containerd[1508]: time="2025-04-30T01:27:12.813837982Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 01:27:12.815601 containerd[1508]: time="2025-04-30T01:27:12.814065445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 01:27:12.815601 containerd[1508]: time="2025-04-30T01:27:12.814155542Z" level=info msg="metadata content store policy set" policy=shared Apr 30 01:27:12.828244 containerd[1508]: time="2025-04-30T01:27:12.828150661Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 01:27:12.829990 containerd[1508]: time="2025-04-30T01:27:12.829266636Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 01:27:12.829990 containerd[1508]: time="2025-04-30T01:27:12.829316034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 01:27:12.829990 containerd[1508]: time="2025-04-30T01:27:12.829344466Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 01:27:12.829990 containerd[1508]: time="2025-04-30T01:27:12.829372704Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 01:27:12.829990 containerd[1508]: time="2025-04-30T01:27:12.829702195Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 01:27:12.830309 containerd[1508]: time="2025-04-30T01:27:12.830283124Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 01:27:12.830569 containerd[1508]: time="2025-04-30T01:27:12.830534542Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 01:27:12.830759 containerd[1508]: time="2025-04-30T01:27:12.830733923Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 01:27:12.830912 containerd[1508]: time="2025-04-30T01:27:12.830879489Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 30 01:27:12.831044 containerd[1508]: time="2025-04-30T01:27:12.831014560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.831647 containerd[1508]: time="2025-04-30T01:27:12.831622396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.831760 containerd[1508]: time="2025-04-30T01:27:12.831736543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.831872 containerd[1508]: time="2025-04-30T01:27:12.831848501Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.831943093Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.831989174Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832024041Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832070768Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832104678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832126412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832146238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832172314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832199165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832224882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832245168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832264235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832284183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.832599 containerd[1508]: time="2025-04-30T01:27:12.832313827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.833122 containerd[1508]: time="2025-04-30T01:27:12.832334414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Apr 30 01:27:12.833122 containerd[1508]: time="2025-04-30T01:27:12.832355633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.833122 containerd[1508]: time="2025-04-30T01:27:12.832381717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.833122 containerd[1508]: time="2025-04-30T01:27:12.832409935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 01:27:12.833122 containerd[1508]: time="2025-04-30T01:27:12.832445865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.833122 containerd[1508]: time="2025-04-30T01:27:12.832483254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.833122 containerd[1508]: time="2025-04-30T01:27:12.832501139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833401032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833439696Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833458233Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833476865Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833492134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833509907Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833534782Z" level=info msg="NRI interface is disabled by configuration." Apr 30 01:27:12.834992 containerd[1508]: time="2025-04-30T01:27:12.833554993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 01:27:12.835475 containerd[1508]: time="2025-04-30T01:27:12.833930846Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 01:27:12.835475 containerd[1508]: time="2025-04-30T01:27:12.834031548Z" level=info msg="Connect containerd service" Apr 30 01:27:12.835475 containerd[1508]: time="2025-04-30T01:27:12.834096012Z" level=info msg="using legacy CRI server" Apr 30 01:27:12.835475 containerd[1508]: time="2025-04-30T01:27:12.834113182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 01:27:12.835475 containerd[1508]: time="2025-04-30T01:27:12.834255092Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 01:27:12.836235 containerd[1508]: time="2025-04-30T01:27:12.836204243Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 01:27:12.836718 
containerd[1508]: time="2025-04-30T01:27:12.836619942Z" level=info msg="Start subscribing containerd event" Apr 30 01:27:12.836718 containerd[1508]: time="2025-04-30T01:27:12.836690942Z" level=info msg="Start recovering state" Apr 30 01:27:12.836824 containerd[1508]: time="2025-04-30T01:27:12.836801794Z" level=info msg="Start event monitor" Apr 30 01:27:12.836864 containerd[1508]: time="2025-04-30T01:27:12.836833651Z" level=info msg="Start snapshots syncer" Apr 30 01:27:12.836864 containerd[1508]: time="2025-04-30T01:27:12.836851208Z" level=info msg="Start cni network conf syncer for default" Apr 30 01:27:12.836936 containerd[1508]: time="2025-04-30T01:27:12.836863484Z" level=info msg="Start streaming server" Apr 30 01:27:12.837368 containerd[1508]: time="2025-04-30T01:27:12.837341028Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 01:27:12.837642 containerd[1508]: time="2025-04-30T01:27:12.837609268Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 01:27:12.839060 containerd[1508]: time="2025-04-30T01:27:12.839035690Z" level=info msg="containerd successfully booted in 0.154702s" Apr 30 01:27:12.839150 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 01:27:13.099841 systemd-networkd[1425]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d293:24:19ff:fef3:4a4e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d293:24:19ff:fef3:4a4e/64 assigned by NDisc. Apr 30 01:27:13.099853 systemd-networkd[1425]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Apr 30 01:27:13.360117 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 01:27:13.479262 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 01:27:13.491166 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 01:27:13.499386 systemd[1]: Started sshd@0-10.243.74.78:22-139.178.68.195:41810.service - OpenSSH per-connection server daemon (139.178.68.195:41810). Apr 30 01:27:13.515672 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 01:27:13.516556 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 01:27:13.534236 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 01:27:13.607781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 01:27:13.617494 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 01:27:13.632649 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 01:27:13.636190 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 01:27:13.724148 tar[1497]: linux-amd64/README.md Apr 30 01:27:13.757432 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 01:27:14.310339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:27:14.321515 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:27:14.459659 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 41810 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:14.462699 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:14.484475 systemd-logind[1491]: New session 1 of user core. Apr 30 01:27:14.486529 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 30 01:27:14.494415 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 01:27:14.536563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 01:27:14.553482 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 01:27:14.568233 (systemd)[1616]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 01:27:14.750097 systemd[1616]: Queued start job for default target default.target. Apr 30 01:27:14.768519 systemd[1616]: Created slice app.slice - User Application Slice. Apr 30 01:27:14.768562 systemd[1616]: Reached target paths.target - Paths. Apr 30 01:27:14.768585 systemd[1616]: Reached target timers.target - Timers. Apr 30 01:27:14.773157 systemd[1616]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 01:27:14.795651 systemd[1616]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 01:27:14.796511 systemd[1616]: Reached target sockets.target - Sockets. Apr 30 01:27:14.796545 systemd[1616]: Reached target basic.target - Basic System. Apr 30 01:27:14.796634 systemd[1616]: Reached target default.target - Main User Target. Apr 30 01:27:14.796710 systemd[1616]: Startup finished in 211ms. Apr 30 01:27:14.797650 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 01:27:14.806387 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 01:27:15.051357 kubelet[1609]: E0430 01:27:15.051249 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:27:15.054713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:27:15.055102 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:27:15.055778 systemd[1]: kubelet.service: Consumed 1.408s CPU time. Apr 30 01:27:15.456455 systemd[1]: Started sshd@1-10.243.74.78:22-139.178.68.195:49118.service - OpenSSH per-connection server daemon (139.178.68.195:49118). Apr 30 01:27:16.355421 sshd[1630]: Accepted publickey for core from 139.178.68.195 port 49118 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:16.357479 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:16.364200 systemd-logind[1491]: New session 2 of user core. Apr 30 01:27:16.371267 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 01:27:16.982634 sshd[1633]: Connection closed by 139.178.68.195 port 49118 Apr 30 01:27:16.983770 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Apr 30 01:27:16.988916 systemd[1]: sshd@1-10.243.74.78:22-139.178.68.195:49118.service: Deactivated successfully. Apr 30 01:27:16.991704 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 01:27:16.992843 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit. Apr 30 01:27:16.994704 systemd-logind[1491]: Removed session 2. Apr 30 01:27:17.148506 systemd[1]: Started sshd@2-10.243.74.78:22-139.178.68.195:49122.service - OpenSSH per-connection server daemon (139.178.68.195:49122). 
Apr 30 01:27:18.045419 sshd[1638]: Accepted publickey for core from 139.178.68.195 port 49122 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:18.047674 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:18.054397 systemd-logind[1491]: New session 3 of user core. Apr 30 01:27:18.065283 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 01:27:18.668005 sshd[1640]: Connection closed by 139.178.68.195 port 49122 Apr 30 01:27:18.668246 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Apr 30 01:27:18.675707 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit. Apr 30 01:27:18.676320 systemd[1]: sshd@2-10.243.74.78:22-139.178.68.195:49122.service: Deactivated successfully. Apr 30 01:27:18.678891 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 01:27:18.681497 systemd-logind[1491]: Removed session 3. Apr 30 01:27:18.703307 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 01:27:18.708918 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 01:27:18.711499 systemd-logind[1491]: New session 4 of user core. Apr 30 01:27:18.723331 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 01:27:18.728993 systemd-logind[1491]: New session 5 of user core. Apr 30 01:27:18.733303 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 01:27:18.926916 coreos-metadata[1480]: Apr 30 01:27:18.926 WARN failed to locate config-drive, using the metadata service API instead Apr 30 01:27:18.952921 coreos-metadata[1480]: Apr 30 01:27:18.952 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Apr 30 01:27:18.958333 coreos-metadata[1480]: Apr 30 01:27:18.958 INFO Fetch failed with 404: resource not found Apr 30 01:27:18.958413 coreos-metadata[1480]: Apr 30 01:27:18.958 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 30 01:27:18.959121 coreos-metadata[1480]: Apr 30 01:27:18.959 INFO Fetch successful Apr 30 01:27:18.959282 coreos-metadata[1480]: Apr 30 01:27:18.959 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Apr 30 01:27:18.975398 coreos-metadata[1480]: Apr 30 01:27:18.975 INFO Fetch successful Apr 30 01:27:18.975589 coreos-metadata[1480]: Apr 30 01:27:18.975 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Apr 30 01:27:18.988653 coreos-metadata[1480]: Apr 30 01:27:18.988 INFO Fetch successful Apr 30 01:27:18.988983 coreos-metadata[1480]: Apr 30 01:27:18.988 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Apr 30 01:27:19.003141 coreos-metadata[1480]: Apr 30 01:27:19.003 INFO Fetch successful Apr 30 01:27:19.003339 coreos-metadata[1480]: Apr 30 01:27:19.003 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Apr 30 01:27:19.020087 coreos-metadata[1480]: Apr 30 01:27:19.020 INFO Fetch successful Apr 30 01:27:19.059363 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 01:27:19.061224 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 30 01:27:19.840192 coreos-metadata[1559]: Apr 30 01:27:19.840 WARN failed to locate config-drive, using the metadata service API instead Apr 30 01:27:19.861740 coreos-metadata[1559]: Apr 30 01:27:19.861 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Apr 30 01:27:19.882856 coreos-metadata[1559]: Apr 30 01:27:19.882 INFO Fetch successful Apr 30 01:27:19.883027 coreos-metadata[1559]: Apr 30 01:27:19.882 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 01:27:19.909420 coreos-metadata[1559]: Apr 30 01:27:19.909 INFO Fetch successful Apr 30 01:27:19.911148 unknown[1559]: wrote ssh authorized keys file for user: core Apr 30 01:27:19.930798 update-ssh-keys[1680]: Updated "/home/core/.ssh/authorized_keys" Apr 30 01:27:19.932737 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 01:27:19.934756 systemd[1]: Finished sshkeys.service. Apr 30 01:27:19.937532 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 01:27:19.938109 systemd[1]: Startup finished in 1.557s (kernel) + 15.659s (initrd) + 12.569s (userspace) = 29.786s. Apr 30 01:27:25.090819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 01:27:25.098244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:27:25.268377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:27:25.280473 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:27:25.332751 kubelet[1691]: E0430 01:27:25.332688 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:27:25.336676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:27:25.336955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:27:28.843590 systemd[1]: Started sshd@3-10.243.74.78:22-139.178.68.195:38422.service - OpenSSH per-connection server daemon (139.178.68.195:38422). Apr 30 01:27:29.742644 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 38422 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:29.744667 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:29.753403 systemd-logind[1491]: New session 6 of user core. Apr 30 01:27:29.756256 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 01:27:30.358255 sshd[1701]: Connection closed by 139.178.68.195 port 38422 Apr 30 01:27:30.359276 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Apr 30 01:27:30.363012 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit. Apr 30 01:27:30.363590 systemd[1]: sshd@3-10.243.74.78:22-139.178.68.195:38422.service: Deactivated successfully. Apr 30 01:27:30.365713 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 01:27:30.367941 systemd-logind[1491]: Removed session 6. Apr 30 01:27:30.526302 systemd[1]: Started sshd@4-10.243.74.78:22-139.178.68.195:38438.service - OpenSSH per-connection server daemon (139.178.68.195:38438). 
Apr 30 01:27:31.417884 sshd[1706]: Accepted publickey for core from 139.178.68.195 port 38438 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:31.419760 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:31.426069 systemd-logind[1491]: New session 7 of user core. Apr 30 01:27:31.437254 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 01:27:32.031703 sshd[1708]: Connection closed by 139.178.68.195 port 38438 Apr 30 01:27:32.032548 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Apr 30 01:27:32.036475 systemd[1]: sshd@4-10.243.74.78:22-139.178.68.195:38438.service: Deactivated successfully. Apr 30 01:27:32.038680 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 01:27:32.040439 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Apr 30 01:27:32.041736 systemd-logind[1491]: Removed session 7. Apr 30 01:27:32.197357 systemd[1]: Started sshd@5-10.243.74.78:22-139.178.68.195:38452.service - OpenSSH per-connection server daemon (139.178.68.195:38452). Apr 30 01:27:33.086935 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 38452 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:33.088800 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:33.095026 systemd-logind[1491]: New session 8 of user core. Apr 30 01:27:33.107361 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 01:27:33.706987 sshd[1715]: Connection closed by 139.178.68.195 port 38452 Apr 30 01:27:33.706102 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Apr 30 01:27:33.710594 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Apr 30 01:27:33.710921 systemd[1]: sshd@5-10.243.74.78:22-139.178.68.195:38452.service: Deactivated successfully. Apr 30 01:27:33.713409 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 01:27:33.715461 systemd-logind[1491]: Removed session 8. Apr 30 01:27:33.867699 systemd[1]: Started sshd@6-10.243.74.78:22-139.178.68.195:38462.service - OpenSSH per-connection server daemon (139.178.68.195:38462). Apr 30 01:27:34.761764 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 38462 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:34.763782 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:34.770465 systemd-logind[1491]: New session 9 of user core. Apr 30 01:27:34.782152 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 01:27:35.252803 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 01:27:35.253297 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:27:35.270266 sudo[1723]: pam_unix(sudo:session): session closed for user root Apr 30 01:27:35.340694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 01:27:35.352251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:27:35.414347 sshd[1722]: Connection closed by 139.178.68.195 port 38462 Apr 30 01:27:35.420320 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Apr 30 01:27:35.427891 systemd[1]: sshd@6-10.243.74.78:22-139.178.68.195:38462.service: Deactivated successfully. Apr 30 01:27:35.431475 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 30 01:27:35.432828 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. Apr 30 01:27:35.434740 systemd-logind[1491]: Removed session 9. Apr 30 01:27:35.566323 systemd[1]: Started sshd@7-10.243.74.78:22-139.178.68.195:38060.service - OpenSSH per-connection server daemon (139.178.68.195:38060). Apr 30 01:27:35.661645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:27:35.678410 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:27:35.737611 kubelet[1738]: E0430 01:27:35.737512 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:27:35.739585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:27:35.739835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:27:36.466404 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 38060 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:36.468323 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:36.475071 systemd-logind[1491]: New session 10 of user core. Apr 30 01:27:36.489171 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 01:27:36.941566 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 01:27:36.942592 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:27:36.947924 sudo[1747]: pam_unix(sudo:session): session closed for user root Apr 30 01:27:36.956783 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 01:27:36.957316 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:27:36.974676 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 01:27:37.026035 augenrules[1769]: No rules Apr 30 01:27:37.028412 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 01:27:37.028855 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 01:27:37.031830 sudo[1746]: pam_unix(sudo:session): session closed for user root Apr 30 01:27:37.175947 sshd[1745]: Connection closed by 139.178.68.195 port 38060 Apr 30 01:27:37.175272 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Apr 30 01:27:37.179537 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Apr 30 01:27:37.179935 systemd[1]: sshd@7-10.243.74.78:22-139.178.68.195:38060.service: Deactivated successfully. Apr 30 01:27:37.182178 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 01:27:37.184389 systemd-logind[1491]: Removed session 10. Apr 30 01:27:37.327830 systemd[1]: Started sshd@8-10.243.74.78:22-139.178.68.195:38064.service - OpenSSH per-connection server daemon (139.178.68.195:38064). 
Apr 30 01:27:38.232743 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 38064 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:27:38.234566 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:27:38.241188 systemd-logind[1491]: New session 11 of user core. Apr 30 01:27:38.251253 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 01:27:38.710505 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 01:27:38.711093 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:27:39.330583 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 01:27:39.330599 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 01:27:39.923717 dockerd[1797]: time="2025-04-30T01:27:39.923533906Z" level=info msg="Starting up" Apr 30 01:27:40.085588 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3395313636-merged.mount: Deactivated successfully. Apr 30 01:27:40.123951 dockerd[1797]: time="2025-04-30T01:27:40.123894177Z" level=info msg="Loading containers: start." Apr 30 01:27:40.342118 kernel: Initializing XFRM netlink socket Apr 30 01:27:40.460665 systemd-networkd[1425]: docker0: Link UP Apr 30 01:27:40.494985 dockerd[1797]: time="2025-04-30T01:27:40.494908015Z" level=info msg="Loading containers: done." Apr 30 01:27:40.517186 dockerd[1797]: time="2025-04-30T01:27:40.516382713Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 01:27:40.517186 dockerd[1797]: time="2025-04-30T01:27:40.516525012Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Apr 30 01:27:40.517186 dockerd[1797]: time="2025-04-30T01:27:40.516703746Z" level=info msg="Daemon has completed initialization" Apr 30 01:27:40.576296 dockerd[1797]: time="2025-04-30T01:27:40.576230381Z" level=info msg="API listen on /run/docker.sock" Apr 30 01:27:40.576576 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 01:27:41.894012 containerd[1508]: time="2025-04-30T01:27:41.893829087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 01:27:42.633541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302226787.mount: Deactivated successfully. Apr 30 01:27:43.154309 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 30 01:27:44.551317 containerd[1508]: time="2025-04-30T01:27:44.551227155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:44.553712 containerd[1508]: time="2025-04-30T01:27:44.553172551Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" Apr 30 01:27:44.553712 containerd[1508]: time="2025-04-30T01:27:44.553650372Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:44.558515 containerd[1508]: time="2025-04-30T01:27:44.558453117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:44.560611 containerd[1508]: time="2025-04-30T01:27:44.559903353Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.665933408s" Apr 30 01:27:44.560611 containerd[1508]: time="2025-04-30T01:27:44.560000686Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Apr 30 01:27:44.561074 containerd[1508]: time="2025-04-30T01:27:44.561016350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 01:27:45.841233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 01:27:45.849202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:27:46.200597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:27:46.208524 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:27:46.349913 kubelet[2056]: E0430 01:27:46.349818 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:27:46.355196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:27:46.355548 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
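The kubelet.service failures logged at 01:27:35, 01:27:46 and again at 01:27:57 below all have the same cause: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during "kubeadm init" or "kubeadm join", so the unit simply keeps restarting until bootstrap reaches that point. For reference only, a minimal sketch of the kind of KubeletConfiguration kubeadm places there (field values are illustrative assumptions, apart from the systemd cgroup driver and static pod path that appear later in this log; do not write this by hand on a kubeadm-managed node):

  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  EOF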
Apr 30 01:27:47.001155 containerd[1508]: time="2025-04-30T01:27:47.001050107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:47.003997 containerd[1508]: time="2025-04-30T01:27:47.003892185Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" Apr 30 01:27:47.007772 containerd[1508]: time="2025-04-30T01:27:47.007702449Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:47.016040 containerd[1508]: time="2025-04-30T01:27:47.015479340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:47.018817 containerd[1508]: time="2025-04-30T01:27:47.018765225Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.45757927s" Apr 30 01:27:47.018900 containerd[1508]: time="2025-04-30T01:27:47.018822773Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Apr 30 01:27:47.020036 containerd[1508]: time="2025-04-30T01:27:47.019796934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 01:27:48.869995 containerd[1508]: time="2025-04-30T01:27:48.869871479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:48.871411 containerd[1508]: time="2025-04-30T01:27:48.871183497Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" Apr 30 01:27:48.872067 containerd[1508]: time="2025-04-30T01:27:48.872029541Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:48.877003 containerd[1508]: time="2025-04-30T01:27:48.876913725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:48.878715 containerd[1508]: time="2025-04-30T01:27:48.878500408Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.858661707s" Apr 30 01:27:48.878715 containerd[1508]: time="2025-04-30T01:27:48.878547514Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Apr 30 01:27:48.879874 
containerd[1508]: time="2025-04-30T01:27:48.879832742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 01:27:50.615899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008372957.mount: Deactivated successfully. Apr 30 01:27:51.441745 containerd[1508]: time="2025-04-30T01:27:51.441678215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:51.444215 containerd[1508]: time="2025-04-30T01:27:51.444168935Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" Apr 30 01:27:51.445147 containerd[1508]: time="2025-04-30T01:27:51.445082250Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:51.448169 containerd[1508]: time="2025-04-30T01:27:51.448060370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:51.450120 containerd[1508]: time="2025-04-30T01:27:51.449488892Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.569611339s" Apr 30 01:27:51.450120 containerd[1508]: time="2025-04-30T01:27:51.449549915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Apr 30 01:27:51.450629 containerd[1508]: time="2025-04-30T01:27:51.450354920Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 01:27:52.062122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158638754.mount: Deactivated successfully. 
Apr 30 01:27:53.481251 containerd[1508]: time="2025-04-30T01:27:53.481135711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:53.482991 containerd[1508]: time="2025-04-30T01:27:53.482849434Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Apr 30 01:27:53.483943 containerd[1508]: time="2025-04-30T01:27:53.483868823Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:53.488506 containerd[1508]: time="2025-04-30T01:27:53.488441285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:53.493224 containerd[1508]: time="2025-04-30T01:27:53.493171276Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.042776302s" Apr 30 01:27:53.493365 containerd[1508]: time="2025-04-30T01:27:53.493227246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Apr 30 01:27:53.496050 containerd[1508]: time="2025-04-30T01:27:53.495717605Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 01:27:54.142535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2025354845.mount: Deactivated successfully. 
Apr 30 01:27:54.147888 containerd[1508]: time="2025-04-30T01:27:54.147745030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:54.149412 containerd[1508]: time="2025-04-30T01:27:54.149324375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Apr 30 01:27:54.151019 containerd[1508]: time="2025-04-30T01:27:54.149931472Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:54.153346 containerd[1508]: time="2025-04-30T01:27:54.153280280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:54.154981 containerd[1508]: time="2025-04-30T01:27:54.154526049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 658.750574ms" Apr 30 01:27:54.154981 containerd[1508]: time="2025-04-30T01:27:54.154608043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 01:27:54.156765 containerd[1508]: time="2025-04-30T01:27:54.156735255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 01:27:54.842537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570431500.mount: Deactivated successfully. Apr 30 01:27:56.592659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 01:27:56.604272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:27:56.939275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:27:56.942446 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:27:57.048087 kubelet[2193]: E0430 01:27:57.047782 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:27:57.051684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:27:57.051947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:27:57.165001 update_engine[1493]: I20250430 01:27:57.164660 1493 update_attempter.cc:509] Updating boot flags... 
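The images pulled above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, with etcd completing just below) match the control-plane set kubeadm preloads for this release. Whether these pulls were driven by a kubeadm pre-pull or by the install script is not shown in the log, so the following is for comparison only; the version flag is an assumption matching the kube-* tags seen here:

  kubeadm config images list --kubernetes-version v1.32.4
  # optionally pre-pull the same set so init does not have to:
  kubeadm config images pull --kubernetes-version v1.32.4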
Apr 30 01:27:57.452254 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2207) Apr 30 01:27:57.558808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2205) Apr 30 01:27:57.988179 containerd[1508]: time="2025-04-30T01:27:57.986355017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:57.989147 containerd[1508]: time="2025-04-30T01:27:57.989091893Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Apr 30 01:27:57.990239 containerd[1508]: time="2025-04-30T01:27:57.990180092Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:57.996988 containerd[1508]: time="2025-04-30T01:27:57.996212378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:27:57.998061 containerd[1508]: time="2025-04-30T01:27:57.998017194Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.841234666s" Apr 30 01:27:57.998153 containerd[1508]: time="2025-04-30T01:27:57.998060228Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Apr 30 01:28:02.626669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:28:02.635415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:28:02.677467 systemd[1]: Reloading requested from client PID 2243 ('systemctl') (unit session-11.scope)... Apr 30 01:28:02.677517 systemd[1]: Reloading... Apr 30 01:28:02.863039 zram_generator::config[2282]: No configuration found. Apr 30 01:28:03.029054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:28:03.136645 systemd[1]: Reloading finished in 458 ms. Apr 30 01:28:03.214250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:28:03.221887 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:28:03.223201 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 01:28:03.223675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:28:03.230593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:28:03.373170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:28:03.376128 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 01:28:03.435009 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:28:03.435009 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 01:28:03.435009 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:28:03.435009 kubelet[2351]: I0430 01:28:03.433922 2351 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 01:28:04.433990 kubelet[2351]: I0430 01:28:04.433855 2351 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 01:28:04.433990 kubelet[2351]: I0430 01:28:04.433918 2351 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 01:28:04.434366 kubelet[2351]: I0430 01:28:04.434331 2351 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 01:28:04.471657 kubelet[2351]: E0430 01:28:04.471594 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.74.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:04.472571 kubelet[2351]: I0430 01:28:04.472143 2351 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 01:28:04.492641 kubelet[2351]: E0430 01:28:04.492568 2351 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 01:28:04.492810 kubelet[2351]: I0430 01:28:04.492646 2351 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 01:28:04.500333 kubelet[2351]: I0430 01:28:04.500290 2351 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 01:28:04.503678 kubelet[2351]: I0430 01:28:04.503570 2351 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 01:28:04.503918 kubelet[2351]: I0430 01:28:04.503638 2351 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-tw98u.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 01:28:04.505602 kubelet[2351]: I0430 01:28:04.505545 2351 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 01:28:04.505602 kubelet[2351]: I0430 01:28:04.505581 2351 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 01:28:04.505887 kubelet[2351]: I0430 01:28:04.505847 2351 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:28:04.509744 kubelet[2351]: I0430 01:28:04.509565 2351 kubelet.go:446] "Attempting to sync node with API server" Apr 30 01:28:04.509744 kubelet[2351]: I0430 01:28:04.509598 2351 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 01:28:04.509744 kubelet[2351]: I0430 01:28:04.509647 2351 kubelet.go:352] "Adding apiserver pod source" Apr 30 01:28:04.509744 kubelet[2351]: I0430 01:28:04.509679 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 01:28:04.519048 kubelet[2351]: I0430 01:28:04.518757 2351 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 01:28:04.523275 kubelet[2351]: I0430 01:28:04.523245 2351 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 01:28:04.525986 kubelet[2351]: W0430 01:28:04.524627 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
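The HardEvictionThresholds in the container-manager NodeConfig above are the kubelet's stock eviction defaults. Expressed as a KubeletConfiguration evictionHard block they correspond to the fragment below (printed only, for comparison; converting 0.1/0.05/0.15 to percentages is the sole interpretation added here):

  cat <<'EOF'
  evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    imagefs.inodesFree: "5%"
  EOF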
Apr 30 01:28:04.529289 kubelet[2351]: W0430 01:28:04.529150 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.74.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tw98u.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:04.529462 kubelet[2351]: E0430 01:28:04.529425 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.74.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tw98u.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:04.530817 kubelet[2351]: W0430 01:28:04.530732 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.74.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:04.530935 kubelet[2351]: E0430 01:28:04.530823 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.74.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:04.533322 kubelet[2351]: I0430 01:28:04.533279 2351 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 01:28:04.533396 kubelet[2351]: I0430 01:28:04.533356 2351 server.go:1287] "Started kubelet" Apr 30 01:28:04.538429 kubelet[2351]: I0430 01:28:04.538350 2351 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 01:28:04.538999 kubelet[2351]: I0430 01:28:04.538890 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 01:28:04.539692 kubelet[2351]: I0430 01:28:04.539659 2351 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 01:28:04.542484 kubelet[2351]: I0430 01:28:04.542456 2351 server.go:490] "Adding debug handlers to kubelet server" Apr 30 01:28:04.543673 kubelet[2351]: E0430 01:28:04.540730 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.74.78:6443/api/v1/namespaces/default/events\": dial tcp 10.243.74.78:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-tw98u.gb1.brightbox.com.183af45cb2cb472d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-tw98u.gb1.brightbox.com,UID:srv-tw98u.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-tw98u.gb1.brightbox.com,},FirstTimestamp:2025-04-30 01:28:04.533323565 +0000 UTC m=+1.151173550,LastTimestamp:2025-04-30 01:28:04.533323565 +0000 UTC m=+1.151173550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-tw98u.gb1.brightbox.com,}" Apr 30 01:28:04.547367 kubelet[2351]: I0430 01:28:04.547340 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 01:28:04.549385 kubelet[2351]: I0430 01:28:04.549349 2351 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 01:28:04.559566 kubelet[2351]: I0430 01:28:04.559528 2351 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 01:28:04.559843 kubelet[2351]: E0430 01:28:04.559812 2351 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-tw98u.gb1.brightbox.com\" not found" Apr 30 01:28:04.561083 kubelet[2351]: I0430 01:28:04.561055 2351 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 01:28:04.561220 kubelet[2351]: I0430 01:28:04.561198 2351 reconciler.go:26] "Reconciler: start to sync state" Apr 30 01:28:04.561986 kubelet[2351]: W0430 01:28:04.561707 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.74.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:04.561986 kubelet[2351]: E0430 01:28:04.561806 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.74.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:04.561986 kubelet[2351]: E0430 01:28:04.561897 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tw98u.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.78:6443: connect: connection refused" interval="200ms" Apr 30 01:28:04.563059 kubelet[2351]: I0430 01:28:04.562219 2351 factory.go:221] Registration of the systemd container factory successfully Apr 30 01:28:04.563059 kubelet[2351]: I0430 01:28:04.562372 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 01:28:04.564348 kubelet[2351]: E0430 01:28:04.563869 2351 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 01:28:04.566578 kubelet[2351]: I0430 01:28:04.566544 2351 factory.go:221] Registration of the containerd container factory successfully Apr 30 01:28:04.577146 kubelet[2351]: I0430 01:28:04.576941 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 01:28:04.578986 kubelet[2351]: I0430 01:28:04.578579 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 01:28:04.578986 kubelet[2351]: I0430 01:28:04.578636 2351 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 01:28:04.578986 kubelet[2351]: I0430 01:28:04.578684 2351 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 01:28:04.578986 kubelet[2351]: I0430 01:28:04.578704 2351 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 01:28:04.578986 kubelet[2351]: E0430 01:28:04.578791 2351 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 01:28:04.590396 kubelet[2351]: W0430 01:28:04.590343 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.74.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:04.591049 kubelet[2351]: E0430 01:28:04.591012 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.74.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:04.621654 kubelet[2351]: I0430 01:28:04.621496 2351 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 01:28:04.621654 kubelet[2351]: I0430 01:28:04.621533 2351 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 01:28:04.621654 kubelet[2351]: I0430 01:28:04.621580 2351 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:28:04.624222 kubelet[2351]: I0430 01:28:04.624180 2351 policy_none.go:49] "None policy: Start" Apr 30 01:28:04.624348 kubelet[2351]: I0430 01:28:04.624277 2351 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 01:28:04.624348 kubelet[2351]: I0430 01:28:04.624340 2351 state_mem.go:35] "Initializing new in-memory state store" Apr 30 01:28:04.638599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 01:28:04.657331 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 01:28:04.660159 kubelet[2351]: E0430 01:28:04.660121 2351 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-tw98u.gb1.brightbox.com\" not found" Apr 30 01:28:04.662780 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 01:28:04.672957 kubelet[2351]: I0430 01:28:04.672865 2351 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 01:28:04.673385 kubelet[2351]: I0430 01:28:04.673350 2351 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 01:28:04.673635 kubelet[2351]: I0430 01:28:04.673439 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 01:28:04.674823 kubelet[2351]: I0430 01:28:04.674229 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 01:28:04.677437 kubelet[2351]: E0430 01:28:04.677136 2351 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 01:28:04.677437 kubelet[2351]: E0430 01:28:04.677390 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-tw98u.gb1.brightbox.com\" not found" Apr 30 01:28:04.697770 systemd[1]: Created slice kubepods-burstable-pod82d9eb498d6335040c1bb5b4f7199a0c.slice - libcontainer container kubepods-burstable-pod82d9eb498d6335040c1bb5b4f7199a0c.slice. Apr 30 01:28:04.704115 systemd[1]: Created slice kubepods-burstable-podc709eda291ffd856f3a0fdcba5a356dd.slice - libcontainer container kubepods-burstable-podc709eda291ffd856f3a0fdcba5a356dd.slice. Apr 30 01:28:04.712038 kubelet[2351]: E0430 01:28:04.710559 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.713395 kubelet[2351]: E0430 01:28:04.713355 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.716404 systemd[1]: Created slice kubepods-burstable-pod3eea208a85f6879000b83d50d297822f.slice - libcontainer container kubepods-burstable-pod3eea208a85f6879000b83d50d297822f.slice. Apr 30 01:28:04.719103 kubelet[2351]: E0430 01:28:04.719075 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.763300 kubelet[2351]: E0430 01:28:04.763217 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tw98u.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.78:6443: connect: connection refused" interval="400ms" Apr 30 01:28:04.777802 kubelet[2351]: I0430 01:28:04.777730 2351 kubelet_node_status.go:76] "Attempting to register node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.778416 kubelet[2351]: E0430 01:28:04.778374 2351 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.74.78:6443/api/v1/nodes\": dial tcp 10.243.74.78:6443: connect: connection refused" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863283 kubelet[2351]: I0430 01:28:04.863083 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eea208a85f6879000b83d50d297822f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" (UID: \"3eea208a85f6879000b83d50d297822f\") " pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863283 kubelet[2351]: I0430 01:28:04.863176 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863283 kubelet[2351]: I0430 01:28:04.863230 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3eea208a85f6879000b83d50d297822f-k8s-certs\") pod \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" (UID: \"3eea208a85f6879000b83d50d297822f\") " pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863283 kubelet[2351]: I0430 01:28:04.863272 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eea208a85f6879000b83d50d297822f-ca-certs\") pod \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" (UID: \"3eea208a85f6879000b83d50d297822f\") " pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863844 kubelet[2351]: I0430 01:28:04.863356 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-ca-certs\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863844 kubelet[2351]: I0430 01:28:04.863388 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-flexvolume-dir\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863844 kubelet[2351]: I0430 01:28:04.863413 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-k8s-certs\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863844 kubelet[2351]: I0430 01:28:04.863442 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-kubeconfig\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.863844 kubelet[2351]: I0430 01:28:04.863468 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82d9eb498d6335040c1bb5b4f7199a0c-kubeconfig\") pod \"kube-scheduler-srv-tw98u.gb1.brightbox.com\" (UID: \"82d9eb498d6335040c1bb5b4f7199a0c\") " pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.982450 kubelet[2351]: I0430 01:28:04.981670 2351 kubelet_node_status.go:76] "Attempting to register node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:04.982450 kubelet[2351]: E0430 01:28:04.982193 2351 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.74.78:6443/api/v1/nodes\": dial tcp 10.243.74.78:6443: connect: connection refused" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:05.014948 containerd[1508]: time="2025-04-30T01:28:05.014728120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-tw98u.gb1.brightbox.com,Uid:82d9eb498d6335040c1bb5b4f7199a0c,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:05.014948 containerd[1508]: 
time="2025-04-30T01:28:05.014912190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-tw98u.gb1.brightbox.com,Uid:c709eda291ffd856f3a0fdcba5a356dd,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:05.021800 containerd[1508]: time="2025-04-30T01:28:05.021171293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-tw98u.gb1.brightbox.com,Uid:3eea208a85f6879000b83d50d297822f,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:05.164554 kubelet[2351]: E0430 01:28:05.164485 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tw98u.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.78:6443: connect: connection refused" interval="800ms" Apr 30 01:28:05.385186 kubelet[2351]: I0430 01:28:05.385139 2351 kubelet_node_status.go:76] "Attempting to register node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:05.385625 kubelet[2351]: E0430 01:28:05.385593 2351 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.74.78:6443/api/v1/nodes\": dial tcp 10.243.74.78:6443: connect: connection refused" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:05.435652 kubelet[2351]: W0430 01:28:05.435514 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.74.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tw98u.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:05.435652 kubelet[2351]: E0430 01:28:05.435595 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.74.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tw98u.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:05.453675 kubelet[2351]: W0430 01:28:05.453547 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.74.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:05.453675 kubelet[2351]: E0430 01:28:05.453598 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.74.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:05.597711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064630155.mount: Deactivated successfully. 
Apr 30 01:28:05.603924 containerd[1508]: time="2025-04-30T01:28:05.603862863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:28:05.605846 containerd[1508]: time="2025-04-30T01:28:05.605800306Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:28:05.607639 containerd[1508]: time="2025-04-30T01:28:05.607590435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 30 01:28:05.608252 containerd[1508]: time="2025-04-30T01:28:05.608201315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 01:28:05.610168 containerd[1508]: time="2025-04-30T01:28:05.610108708Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:28:05.611425 containerd[1508]: time="2025-04-30T01:28:05.611284410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 01:28:05.612060 containerd[1508]: time="2025-04-30T01:28:05.611735910Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:28:05.617257 containerd[1508]: time="2025-04-30T01:28:05.617184042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:28:05.619062 containerd[1508]: time="2025-04-30T01:28:05.618575179Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.13373ms" Apr 30 01:28:05.622304 containerd[1508]: time="2025-04-30T01:28:05.622141928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 605.491193ms" Apr 30 01:28:05.624029 containerd[1508]: time="2025-04-30T01:28:05.623992998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 607.342193ms" Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.875760535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.887287129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.887308676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.887467538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.886088855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.886195028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.886227861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:05.887649 containerd[1508]: time="2025-04-30T01:28:05.886378604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:05.890910 containerd[1508]: time="2025-04-30T01:28:05.890734513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:05.891098 containerd[1508]: time="2025-04-30T01:28:05.891022859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:05.891641 containerd[1508]: time="2025-04-30T01:28:05.891253933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:05.894258 containerd[1508]: time="2025-04-30T01:28:05.894180360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:05.948239 systemd[1]: Started cri-containerd-710e7c2e11293c13042ab2ec298dcde8256a1573e753afc89ab86545a47c5db6.scope - libcontainer container 710e7c2e11293c13042ab2ec298dcde8256a1573e753afc89ab86545a47c5db6. Apr 30 01:28:05.965477 kubelet[2351]: E0430 01:28:05.965308 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tw98u.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.78:6443: connect: connection refused" interval="1.6s" Apr 30 01:28:05.974219 systemd[1]: Started cri-containerd-0f5276fa58cbf9876134c212e5245ac5f6cb6d9a6963fb6fdff615cd8a7f1cbc.scope - libcontainer container 0f5276fa58cbf9876134c212e5245ac5f6cb6d9a6963fb6fdff615cd8a7f1cbc. 
Apr 30 01:28:05.975232 kubelet[2351]: W0430 01:28:05.974532 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.74.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:05.975232 kubelet[2351]: E0430 01:28:05.974628 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.74.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:05.977574 systemd[1]: Started cri-containerd-da535e4502158743959d370783ccaf6b08cebd9dadc02da60ddfabe6d5bfcd43.scope - libcontainer container da535e4502158743959d370783ccaf6b08cebd9dadc02da60ddfabe6d5bfcd43. Apr 30 01:28:06.093063 containerd[1508]: time="2025-04-30T01:28:06.092987156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-tw98u.gb1.brightbox.com,Uid:c709eda291ffd856f3a0fdcba5a356dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"da535e4502158743959d370783ccaf6b08cebd9dadc02da60ddfabe6d5bfcd43\"" Apr 30 01:28:06.113378 containerd[1508]: time="2025-04-30T01:28:06.112318448Z" level=info msg="CreateContainer within sandbox \"da535e4502158743959d370783ccaf6b08cebd9dadc02da60ddfabe6d5bfcd43\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 01:28:06.117569 containerd[1508]: time="2025-04-30T01:28:06.116728804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-tw98u.gb1.brightbox.com,Uid:3eea208a85f6879000b83d50d297822f,Namespace:kube-system,Attempt:0,} returns sandbox id \"710e7c2e11293c13042ab2ec298dcde8256a1573e753afc89ab86545a47c5db6\"" Apr 30 01:28:06.123220 containerd[1508]: time="2025-04-30T01:28:06.123130833Z" level=info msg="CreateContainer within sandbox \"710e7c2e11293c13042ab2ec298dcde8256a1573e753afc89ab86545a47c5db6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 01:28:06.124039 containerd[1508]: time="2025-04-30T01:28:06.123632821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-tw98u.gb1.brightbox.com,Uid:82d9eb498d6335040c1bb5b4f7199a0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5276fa58cbf9876134c212e5245ac5f6cb6d9a6963fb6fdff615cd8a7f1cbc\"" Apr 30 01:28:06.128766 containerd[1508]: time="2025-04-30T01:28:06.128699428Z" level=info msg="CreateContainer within sandbox \"0f5276fa58cbf9876134c212e5245ac5f6cb6d9a6963fb6fdff615cd8a7f1cbc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 01:28:06.140581 kubelet[2351]: W0430 01:28:06.140145 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.74.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.74.78:6443: connect: connection refused Apr 30 01:28:06.140581 kubelet[2351]: E0430 01:28:06.140270 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.74.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:06.162355 containerd[1508]: time="2025-04-30T01:28:06.162044460Z" 
level=info msg="CreateContainer within sandbox \"710e7c2e11293c13042ab2ec298dcde8256a1573e753afc89ab86545a47c5db6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f2a792bda2c90ec32fa15ab1490f33024d639f171f559c3d2e6b05109b58e842\"" Apr 30 01:28:06.164371 containerd[1508]: time="2025-04-30T01:28:06.164013875Z" level=info msg="StartContainer for \"f2a792bda2c90ec32fa15ab1490f33024d639f171f559c3d2e6b05109b58e842\"" Apr 30 01:28:06.172465 containerd[1508]: time="2025-04-30T01:28:06.172263442Z" level=info msg="CreateContainer within sandbox \"da535e4502158743959d370783ccaf6b08cebd9dadc02da60ddfabe6d5bfcd43\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f680d34a304bf3c7f35a9b0d6a818d9a77d4a13ce61c68a15864c3f5a84b070\"" Apr 30 01:28:06.172687 containerd[1508]: time="2025-04-30T01:28:06.172611496Z" level=info msg="CreateContainer within sandbox \"0f5276fa58cbf9876134c212e5245ac5f6cb6d9a6963fb6fdff615cd8a7f1cbc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee18ea76acf33b48484cc05c243abb4e3154d9c0ce3c1720a2d5e424a58275ee\"" Apr 30 01:28:06.173600 containerd[1508]: time="2025-04-30T01:28:06.173415113Z" level=info msg="StartContainer for \"ee18ea76acf33b48484cc05c243abb4e3154d9c0ce3c1720a2d5e424a58275ee\"" Apr 30 01:28:06.190651 containerd[1508]: time="2025-04-30T01:28:06.190420850Z" level=info msg="StartContainer for \"8f680d34a304bf3c7f35a9b0d6a818d9a77d4a13ce61c68a15864c3f5a84b070\"" Apr 30 01:28:06.195790 kubelet[2351]: I0430 01:28:06.195213 2351 kubelet_node_status.go:76] "Attempting to register node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:06.195790 kubelet[2351]: E0430 01:28:06.195745 2351 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.243.74.78:6443/api/v1/nodes\": dial tcp 10.243.74.78:6443: connect: connection refused" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:06.235637 systemd[1]: Started cri-containerd-f2a792bda2c90ec32fa15ab1490f33024d639f171f559c3d2e6b05109b58e842.scope - libcontainer container f2a792bda2c90ec32fa15ab1490f33024d639f171f559c3d2e6b05109b58e842. Apr 30 01:28:06.246250 systemd[1]: Started cri-containerd-ee18ea76acf33b48484cc05c243abb4e3154d9c0ce3c1720a2d5e424a58275ee.scope - libcontainer container ee18ea76acf33b48484cc05c243abb4e3154d9c0ce3c1720a2d5e424a58275ee. Apr 30 01:28:06.259270 systemd[1]: Started cri-containerd-8f680d34a304bf3c7f35a9b0d6a818d9a77d4a13ce61c68a15864c3f5a84b070.scope - libcontainer container 8f680d34a304bf3c7f35a9b0d6a818d9a77d4a13ce61c68a15864c3f5a84b070. 
Apr 30 01:28:06.363150 containerd[1508]: time="2025-04-30T01:28:06.363083252Z" level=info msg="StartContainer for \"f2a792bda2c90ec32fa15ab1490f33024d639f171f559c3d2e6b05109b58e842\" returns successfully" Apr 30 01:28:06.385839 containerd[1508]: time="2025-04-30T01:28:06.385753839Z" level=info msg="StartContainer for \"8f680d34a304bf3c7f35a9b0d6a818d9a77d4a13ce61c68a15864c3f5a84b070\" returns successfully" Apr 30 01:28:06.430658 containerd[1508]: time="2025-04-30T01:28:06.429560459Z" level=info msg="StartContainer for \"ee18ea76acf33b48484cc05c243abb4e3154d9c0ce3c1720a2d5e424a58275ee\" returns successfully" Apr 30 01:28:06.558226 kubelet[2351]: E0430 01:28:06.558167 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.74.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.74.78:6443: connect: connection refused" logger="UnhandledError" Apr 30 01:28:06.612921 kubelet[2351]: E0430 01:28:06.612873 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:06.616429 kubelet[2351]: E0430 01:28:06.616395 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:06.620177 kubelet[2351]: E0430 01:28:06.620142 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:07.625000 kubelet[2351]: E0430 01:28:07.623485 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:07.625000 kubelet[2351]: E0430 01:28:07.623521 2351 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:07.800579 kubelet[2351]: I0430 01:28:07.800532 2351 kubelet_node_status.go:76] "Attempting to register node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.791017 kubelet[2351]: E0430 01:28:09.790883 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-tw98u.gb1.brightbox.com\" not found" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.805001 kubelet[2351]: I0430 01:28:09.804613 2351 kubelet_node_status.go:79] "Successfully registered node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.805001 kubelet[2351]: E0430 01:28:09.804695 2351 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"srv-tw98u.gb1.brightbox.com\": node \"srv-tw98u.gb1.brightbox.com\" not found" Apr 30 01:28:09.861828 kubelet[2351]: I0430 01:28:09.861622 2351 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.875815 kubelet[2351]: E0430 01:28:09.875730 2351 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.875815 kubelet[2351]: I0430 01:28:09.875786 2351 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.877934 kubelet[2351]: E0430 01:28:09.877653 2351 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-tw98u.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.877934 kubelet[2351]: I0430 01:28:09.877693 2351 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:09.880157 kubelet[2351]: E0430 01:28:09.880121 2351 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:10.424263 kubelet[2351]: I0430 01:28:10.424201 2351 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:10.428239 kubelet[2351]: E0430 01:28:10.428194 2351 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-tw98u.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:10.533075 kubelet[2351]: I0430 01:28:10.532918 2351 apiserver.go:52] "Watching apiserver" Apr 30 01:28:10.562085 kubelet[2351]: I0430 01:28:10.561973 2351 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 01:28:11.473337 kubelet[2351]: I0430 01:28:11.473235 2351 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:11.482515 kubelet[2351]: W0430 01:28:11.482412 2351 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 01:28:12.430322 systemd[1]: Reloading requested from client PID 2622 ('systemctl') (unit session-11.scope)... Apr 30 01:28:12.430983 systemd[1]: Reloading... Apr 30 01:28:12.617058 zram_generator::config[2664]: No configuration found. Apr 30 01:28:12.805227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:28:12.948331 systemd[1]: Reloading finished in 516 ms. Apr 30 01:28:13.029640 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:28:13.043923 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 01:28:13.044378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:28:13.044498 systemd[1]: kubelet.service: Consumed 1.744s CPU time, 124.4M memory peak, 0B memory swap peak. Apr 30 01:28:13.067135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:28:13.292115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 01:28:13.304544 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 01:28:13.409475 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:28:13.409475 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 01:28:13.409475 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:28:13.409475 kubelet[2725]: I0430 01:28:13.409402 2725 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 01:28:13.421225 kubelet[2725]: I0430 01:28:13.421189 2725 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 01:28:13.421225 kubelet[2725]: I0430 01:28:13.421221 2725 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 01:28:13.421602 kubelet[2725]: I0430 01:28:13.421568 2725 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 01:28:13.425123 kubelet[2725]: I0430 01:28:13.425097 2725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 01:28:13.432442 kubelet[2725]: I0430 01:28:13.430004 2725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 01:28:13.440714 kubelet[2725]: E0430 01:28:13.438665 2725 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 01:28:13.440714 kubelet[2725]: I0430 01:28:13.438709 2725 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 01:28:13.449190 kubelet[2725]: I0430 01:28:13.449153 2725 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 01:28:13.449761 kubelet[2725]: I0430 01:28:13.449721 2725 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 01:28:13.450207 kubelet[2725]: I0430 01:28:13.449857 2725 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-tw98u.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 01:28:13.450558 kubelet[2725]: I0430 01:28:13.450536 2725 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 01:28:13.450664 kubelet[2725]: I0430 01:28:13.450648 2725 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 01:28:13.450829 kubelet[2725]: I0430 01:28:13.450811 2725 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:28:13.451209 kubelet[2725]: I0430 01:28:13.451190 2725 kubelet.go:446] "Attempting to sync node with API server" Apr 30 01:28:13.452642 kubelet[2725]: I0430 01:28:13.452611 2725 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 01:28:13.452717 kubelet[2725]: I0430 01:28:13.452661 2725 kubelet.go:352] "Adding apiserver pod source" Apr 30 01:28:13.452717 kubelet[2725]: I0430 01:28:13.452691 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 01:28:13.455610 sudo[2739]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 01:28:13.456190 sudo[2739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 01:28:13.458133 kubelet[2725]: I0430 01:28:13.457207 2725 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 01:28:13.458305 kubelet[2725]: I0430 01:28:13.457803 2725 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 01:28:13.459145 kubelet[2725]: I0430 01:28:13.459118 2725 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 01:28:13.459330 kubelet[2725]: I0430 01:28:13.459311 
2725 server.go:1287] "Started kubelet" Apr 30 01:28:13.466913 kubelet[2725]: I0430 01:28:13.466878 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 01:28:13.482023 kubelet[2725]: I0430 01:28:13.481526 2725 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 01:28:13.490422 kubelet[2725]: I0430 01:28:13.490305 2725 server.go:490] "Adding debug handlers to kubelet server" Apr 30 01:28:13.495875 kubelet[2725]: I0430 01:28:13.494004 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 01:28:13.495875 kubelet[2725]: I0430 01:28:13.494461 2725 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 01:28:13.495875 kubelet[2725]: I0430 01:28:13.494802 2725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 01:28:13.502364 kubelet[2725]: I0430 01:28:13.502335 2725 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 01:28:13.505599 kubelet[2725]: I0430 01:28:13.505257 2725 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 01:28:13.525452 kubelet[2725]: I0430 01:28:13.506008 2725 reconciler.go:26] "Reconciler: start to sync state" Apr 30 01:28:13.529374 kubelet[2725]: I0430 01:28:13.517700 2725 factory.go:221] Registration of the systemd container factory successfully Apr 30 01:28:13.541815 kubelet[2725]: I0430 01:28:13.541779 2725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 01:28:13.542800 kubelet[2725]: E0430 01:28:13.538800 2725 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 01:28:13.550021 kubelet[2725]: I0430 01:28:13.549597 2725 factory.go:221] Registration of the containerd container factory successfully Apr 30 01:28:13.554748 kubelet[2725]: I0430 01:28:13.554309 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 01:28:13.576279 kubelet[2725]: I0430 01:28:13.575931 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 01:28:13.576279 kubelet[2725]: I0430 01:28:13.576016 2725 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 01:28:13.576279 kubelet[2725]: I0430 01:28:13.576046 2725 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 01:28:13.576279 kubelet[2725]: I0430 01:28:13.576068 2725 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 01:28:13.576279 kubelet[2725]: E0430 01:28:13.576139 2725 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 01:28:13.677033 kubelet[2725]: E0430 01:28:13.676309 2725 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703339 2725 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703370 2725 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703406 2725 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703733 2725 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703761 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703812 2725 policy_none.go:49] "None policy: Start" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703835 2725 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.703867 2725 state_mem.go:35] "Initializing new in-memory state store" Apr 30 01:28:13.705312 kubelet[2725]: I0430 01:28:13.704070 2725 state_mem.go:75] "Updated machine memory state" Apr 30 01:28:13.723223 kubelet[2725]: I0430 01:28:13.723188 2725 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 01:28:13.725854 kubelet[2725]: I0430 01:28:13.725346 2725 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 01:28:13.725854 kubelet[2725]: I0430 01:28:13.725379 2725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 01:28:13.725854 kubelet[2725]: I0430 01:28:13.725852 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 01:28:13.728386 kubelet[2725]: E0430 01:28:13.728253 2725 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 01:28:13.877778 kubelet[2725]: I0430 01:28:13.877605 2725 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.881064 kubelet[2725]: I0430 01:28:13.879369 2725 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.883664 kubelet[2725]: I0430 01:28:13.879558 2725 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.883664 kubelet[2725]: I0430 01:28:13.883355 2725 kubelet_node_status.go:76] "Attempting to register node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.900943 kubelet[2725]: W0430 01:28:13.900871 2725 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 01:28:13.905753 kubelet[2725]: W0430 01:28:13.905715 2725 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 01:28:13.909201 kubelet[2725]: W0430 01:28:13.909170 2725 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 01:28:13.909309 kubelet[2725]: E0430 01:28:13.909251 2725 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.915208 kubelet[2725]: I0430 01:28:13.915163 2725 kubelet_node_status.go:125] "Node was previously registered" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.915413 kubelet[2725]: I0430 01:28:13.915390 2725 kubelet_node_status.go:79] "Successfully registered node" node="srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.931730 kubelet[2725]: I0430 01:28:13.931544 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eea208a85f6879000b83d50d297822f-ca-certs\") pod \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" (UID: \"3eea208a85f6879000b83d50d297822f\") " pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.932389 kubelet[2725]: I0430 01:28:13.932023 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eea208a85f6879000b83d50d297822f-k8s-certs\") pod \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" (UID: \"3eea208a85f6879000b83d50d297822f\") " pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.932389 kubelet[2725]: I0430 01:28:13.932125 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eea208a85f6879000b83d50d297822f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-tw98u.gb1.brightbox.com\" (UID: \"3eea208a85f6879000b83d50d297822f\") " pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.932389 kubelet[2725]: I0430 01:28:13.932160 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-k8s-certs\") pod 
\"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.932389 kubelet[2725]: I0430 01:28:13.932272 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-kubeconfig\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.932389 kubelet[2725]: I0430 01:28:13.932339 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82d9eb498d6335040c1bb5b4f7199a0c-kubeconfig\") pod \"kube-scheduler-srv-tw98u.gb1.brightbox.com\" (UID: \"82d9eb498d6335040c1bb5b4f7199a0c\") " pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.933583 kubelet[2725]: I0430 01:28:13.933056 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-ca-certs\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.933583 kubelet[2725]: I0430 01:28:13.933452 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-flexvolume-dir\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:13.933583 kubelet[2725]: I0430 01:28:13.933521 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c709eda291ffd856f3a0fdcba5a356dd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-tw98u.gb1.brightbox.com\" (UID: \"c709eda291ffd856f3a0fdcba5a356dd\") " pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" Apr 30 01:28:14.331310 sudo[2739]: pam_unix(sudo:session): session closed for user root Apr 30 01:28:14.472195 kubelet[2725]: I0430 01:28:14.471823 2725 apiserver.go:52] "Watching apiserver" Apr 30 01:28:14.526244 kubelet[2725]: I0430 01:28:14.526148 2725 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 01:28:14.636543 kubelet[2725]: I0430 01:28:14.636263 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-tw98u.gb1.brightbox.com" podStartSLOduration=1.636199009 podStartE2EDuration="1.636199009s" podCreationTimestamp="2025-04-30 01:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:28:14.635855889 +0000 UTC m=+1.305053090" watchObservedRunningTime="2025-04-30 01:28:14.636199009 +0000 UTC m=+1.305396202" Apr 30 01:28:14.658469 kubelet[2725]: I0430 01:28:14.658172 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-tw98u.gb1.brightbox.com" podStartSLOduration=3.658152453 
podStartE2EDuration="3.658152453s" podCreationTimestamp="2025-04-30 01:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:28:14.655032612 +0000 UTC m=+1.324229808" watchObservedRunningTime="2025-04-30 01:28:14.658152453 +0000 UTC m=+1.327349648" Apr 30 01:28:14.687270 kubelet[2725]: I0430 01:28:14.687101 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-tw98u.gb1.brightbox.com" podStartSLOduration=1.687074213 podStartE2EDuration="1.687074213s" podCreationTimestamp="2025-04-30 01:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:28:14.673498812 +0000 UTC m=+1.342696016" watchObservedRunningTime="2025-04-30 01:28:14.687074213 +0000 UTC m=+1.356271420" Apr 30 01:28:16.169645 sudo[1780]: pam_unix(sudo:session): session closed for user root Apr 30 01:28:16.315238 sshd[1779]: Connection closed by 139.178.68.195 port 38064 Apr 30 01:28:16.317438 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Apr 30 01:28:16.323238 systemd[1]: sshd@8-10.243.74.78:22-139.178.68.195:38064.service: Deactivated successfully. Apr 30 01:28:16.327247 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 01:28:16.327518 systemd[1]: session-11.scope: Consumed 7.220s CPU time, 136.5M memory peak, 0B memory swap peak. Apr 30 01:28:16.330128 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Apr 30 01:28:16.332462 systemd-logind[1491]: Removed session 11. Apr 30 01:28:17.121093 kubelet[2725]: I0430 01:28:17.121040 2725 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 01:28:17.121910 containerd[1508]: time="2025-04-30T01:28:17.121739812Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 01:28:17.124778 kubelet[2725]: I0430 01:28:17.122597 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 01:28:17.259880 kubelet[2725]: I0430 01:28:17.255782 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-kube-proxy\") pod \"kube-proxy-gz78b\" (UID: \"2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45\") " pod="kube-system/kube-proxy-gz78b" Apr 30 01:28:17.260642 kubelet[2725]: I0430 01:28:17.260502 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-xtables-lock\") pod \"kube-proxy-gz78b\" (UID: \"2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45\") " pod="kube-system/kube-proxy-gz78b" Apr 30 01:28:17.260642 kubelet[2725]: I0430 01:28:17.260618 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-lib-modules\") pod \"kube-proxy-gz78b\" (UID: \"2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45\") " pod="kube-system/kube-proxy-gz78b" Apr 30 01:28:17.260763 kubelet[2725]: I0430 01:28:17.260688 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvcpg\" (UniqueName: \"kubernetes.io/projected/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-kube-api-access-wvcpg\") pod \"kube-proxy-gz78b\" (UID: \"2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45\") " pod="kube-system/kube-proxy-gz78b" Apr 30 01:28:17.262724 systemd[1]: Created slice kubepods-besteffort-pod2eab43b7_c4c1_4e23_a96c_4d8eeaf72e45.slice - libcontainer container kubepods-besteffort-pod2eab43b7_c4c1_4e23_a96c_4d8eeaf72e45.slice. Apr 30 01:28:17.280880 systemd[1]: Created slice kubepods-burstable-pod14a5f62f_66ac_4dee_89cc_59dd66c6b2e7.slice - libcontainer container kubepods-burstable-pod14a5f62f_66ac_4dee_89cc_59dd66c6b2e7.slice. 
Apr 30 01:28:17.361087 kubelet[2725]: I0430 01:28:17.360950 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-xtables-lock\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361087 kubelet[2725]: I0430 01:28:17.361046 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-net\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361087 kubelet[2725]: I0430 01:28:17.361082 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hubble-tls\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361087 kubelet[2725]: I0430 01:28:17.361114 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-cgroup\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361671 kubelet[2725]: I0430 01:28:17.361152 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-config-path\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361671 kubelet[2725]: I0430 01:28:17.361180 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-kernel\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361671 kubelet[2725]: I0430 01:28:17.361218 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrmr\" (UniqueName: \"kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-kube-api-access-hcrmr\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361671 kubelet[2725]: I0430 01:28:17.361245 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-run\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361671 kubelet[2725]: I0430 01:28:17.361270 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cni-path\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.361671 kubelet[2725]: I0430 01:28:17.361297 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-lib-modules\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.362046 kubelet[2725]: I0430 01:28:17.361327 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hostproc\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.362046 kubelet[2725]: I0430 01:28:17.361368 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-bpf-maps\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.362046 kubelet[2725]: I0430 01:28:17.361395 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-etc-cni-netd\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.362046 kubelet[2725]: I0430 01:28:17.361468 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-clustermesh-secrets\") pod \"cilium-zmqzb\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " pod="kube-system/cilium-zmqzb" Apr 30 01:28:17.371349 kubelet[2725]: E0430 01:28:17.370990 2725 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 01:28:17.371349 kubelet[2725]: E0430 01:28:17.371063 2725 projected.go:194] Error preparing data for projected volume kube-api-access-wvcpg for pod kube-system/kube-proxy-gz78b: configmap "kube-root-ca.crt" not found Apr 30 01:28:17.371349 kubelet[2725]: E0430 01:28:17.371222 2725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-kube-api-access-wvcpg podName:2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45 nodeName:}" failed. No retries permitted until 2025-04-30 01:28:17.871174228 +0000 UTC m=+4.540371417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wvcpg" (UniqueName: "kubernetes.io/projected/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-kube-api-access-wvcpg") pod "kube-proxy-gz78b" (UID: "2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45") : configmap "kube-root-ca.crt" not found Apr 30 01:28:17.486235 kubelet[2725]: E0430 01:28:17.486176 2725 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 01:28:17.486235 kubelet[2725]: E0430 01:28:17.486232 2725 projected.go:194] Error preparing data for projected volume kube-api-access-hcrmr for pod kube-system/cilium-zmqzb: configmap "kube-root-ca.crt" not found Apr 30 01:28:17.486481 kubelet[2725]: E0430 01:28:17.486307 2725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-kube-api-access-hcrmr podName:14a5f62f-66ac-4dee-89cc-59dd66c6b2e7 nodeName:}" failed. No retries permitted until 2025-04-30 01:28:17.98628519 +0000 UTC m=+4.655482385 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hcrmr" (UniqueName: "kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-kube-api-access-hcrmr") pod "cilium-zmqzb" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7") : configmap "kube-root-ca.crt" not found Apr 30 01:28:17.965511 kubelet[2725]: E0430 01:28:17.965447 2725 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 01:28:17.965511 kubelet[2725]: E0430 01:28:17.965503 2725 projected.go:194] Error preparing data for projected volume kube-api-access-wvcpg for pod kube-system/kube-proxy-gz78b: configmap "kube-root-ca.crt" not found Apr 30 01:28:17.965777 kubelet[2725]: E0430 01:28:17.965588 2725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-kube-api-access-wvcpg podName:2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45 nodeName:}" failed. No retries permitted until 2025-04-30 01:28:18.965565695 +0000 UTC m=+5.634762891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wvcpg" (UniqueName: "kubernetes.io/projected/2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45-kube-api-access-wvcpg") pod "kube-proxy-gz78b" (UID: "2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45") : configmap "kube-root-ca.crt" not found Apr 30 01:28:18.178991 systemd[1]: Created slice kubepods-besteffort-podced6da18_764a_41bb_85b5_f3af62cdfa1c.slice - libcontainer container kubepods-besteffort-podced6da18_764a_41bb_85b5_f3af62cdfa1c.slice. Apr 30 01:28:18.191032 containerd[1508]: time="2025-04-30T01:28:18.190330372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmqzb,Uid:14a5f62f-66ac-4dee-89cc-59dd66c6b2e7,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:18.253168 containerd[1508]: time="2025-04-30T01:28:18.252141070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:18.253168 containerd[1508]: time="2025-04-30T01:28:18.252257823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:18.253168 containerd[1508]: time="2025-04-30T01:28:18.252281699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:18.253168 containerd[1508]: time="2025-04-30T01:28:18.252885801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:18.267399 kubelet[2725]: I0430 01:28:18.267225 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ced6da18-764a-41bb-85b5-f3af62cdfa1c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2l27n\" (UID: \"ced6da18-764a-41bb-85b5-f3af62cdfa1c\") " pod="kube-system/cilium-operator-6c4d7847fc-2l27n" Apr 30 01:28:18.267399 kubelet[2725]: I0430 01:28:18.267295 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x97pc\" (UniqueName: \"kubernetes.io/projected/ced6da18-764a-41bb-85b5-f3af62cdfa1c-kube-api-access-x97pc\") pod \"cilium-operator-6c4d7847fc-2l27n\" (UID: \"ced6da18-764a-41bb-85b5-f3af62cdfa1c\") " pod="kube-system/cilium-operator-6c4d7847fc-2l27n" Apr 30 01:28:18.285317 systemd[1]: Started cri-containerd-2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e.scope - libcontainer container 2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e. Apr 30 01:28:18.327191 containerd[1508]: time="2025-04-30T01:28:18.327105688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmqzb,Uid:14a5f62f-66ac-4dee-89cc-59dd66c6b2e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\"" Apr 30 01:28:18.331474 containerd[1508]: time="2025-04-30T01:28:18.331431623Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 01:28:18.488230 containerd[1508]: time="2025-04-30T01:28:18.487823711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2l27n,Uid:ced6da18-764a-41bb-85b5-f3af62cdfa1c,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:18.524073 containerd[1508]: time="2025-04-30T01:28:18.523774554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:18.524073 containerd[1508]: time="2025-04-30T01:28:18.523862965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:18.524073 containerd[1508]: time="2025-04-30T01:28:18.523889101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:18.524073 containerd[1508]: time="2025-04-30T01:28:18.524026264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:18.552975 systemd[1]: run-containerd-runc-k8s.io-b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d-runc.I0bK2T.mount: Deactivated successfully. Apr 30 01:28:18.561162 systemd[1]: Started cri-containerd-b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d.scope - libcontainer container b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d. 
Apr 30 01:28:18.633832 containerd[1508]: time="2025-04-30T01:28:18.633755108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2l27n,Uid:ced6da18-764a-41bb-85b5-f3af62cdfa1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d\"" Apr 30 01:28:19.073920 containerd[1508]: time="2025-04-30T01:28:19.073868478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gz78b,Uid:2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:19.103459 containerd[1508]: time="2025-04-30T01:28:19.103244962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:19.103459 containerd[1508]: time="2025-04-30T01:28:19.103330239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:19.103459 containerd[1508]: time="2025-04-30T01:28:19.103357853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:19.103901 containerd[1508]: time="2025-04-30T01:28:19.103655635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:19.129231 systemd[1]: Started cri-containerd-9b5f413f771863f863dba108a208397ef94f5d8a7617454fbb41c4dfe5ea442e.scope - libcontainer container 9b5f413f771863f863dba108a208397ef94f5d8a7617454fbb41c4dfe5ea442e. Apr 30 01:28:19.170602 containerd[1508]: time="2025-04-30T01:28:19.170512634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gz78b,Uid:2eab43b7-c4c1-4e23-a96c-4d8eeaf72e45,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b5f413f771863f863dba108a208397ef94f5d8a7617454fbb41c4dfe5ea442e\"" Apr 30 01:28:19.174585 containerd[1508]: time="2025-04-30T01:28:19.174310525Z" level=info msg="CreateContainer within sandbox \"9b5f413f771863f863dba108a208397ef94f5d8a7617454fbb41c4dfe5ea442e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 01:28:19.190304 containerd[1508]: time="2025-04-30T01:28:19.190247321Z" level=info msg="CreateContainer within sandbox \"9b5f413f771863f863dba108a208397ef94f5d8a7617454fbb41c4dfe5ea442e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c55ae0e67e564e04b80f1f0965aed367642b1b9b8b9bbbc083b85ff9177a8212\"" Apr 30 01:28:19.192044 containerd[1508]: time="2025-04-30T01:28:19.191422138Z" level=info msg="StartContainer for \"c55ae0e67e564e04b80f1f0965aed367642b1b9b8b9bbbc083b85ff9177a8212\"" Apr 30 01:28:19.232176 systemd[1]: Started cri-containerd-c55ae0e67e564e04b80f1f0965aed367642b1b9b8b9bbbc083b85ff9177a8212.scope - libcontainer container c55ae0e67e564e04b80f1f0965aed367642b1b9b8b9bbbc083b85ff9177a8212. 
Apr 30 01:28:19.283851 containerd[1508]: time="2025-04-30T01:28:19.283644153Z" level=info msg="StartContainer for \"c55ae0e67e564e04b80f1f0965aed367642b1b9b8b9bbbc083b85ff9177a8212\" returns successfully" Apr 30 01:28:19.693827 kubelet[2725]: I0430 01:28:19.693732 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gz78b" podStartSLOduration=2.693713904 podStartE2EDuration="2.693713904s" podCreationTimestamp="2025-04-30 01:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:28:19.693468238 +0000 UTC m=+6.362665449" watchObservedRunningTime="2025-04-30 01:28:19.693713904 +0000 UTC m=+6.362911098" Apr 30 01:28:25.512451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665989348.mount: Deactivated successfully. Apr 30 01:28:28.972041 containerd[1508]: time="2025-04-30T01:28:28.971811471Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:28:28.975132 containerd[1508]: time="2025-04-30T01:28:28.975066288Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 01:28:28.975787 containerd[1508]: time="2025-04-30T01:28:28.975730316Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:28:28.978739 containerd[1508]: time="2025-04-30T01:28:28.978505636Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.647003704s" Apr 30 01:28:28.978739 containerd[1508]: time="2025-04-30T01:28:28.978552508Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 01:28:28.981134 containerd[1508]: time="2025-04-30T01:28:28.981084359Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 01:28:28.983418 containerd[1508]: time="2025-04-30T01:28:28.983373394Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 01:28:29.060427 containerd[1508]: time="2025-04-30T01:28:29.060245897Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\"" Apr 30 01:28:29.062243 containerd[1508]: time="2025-04-30T01:28:29.060938695Z" level=info msg="StartContainer for \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\"" Apr 30 01:28:29.222187 systemd[1]: Started 
cri-containerd-8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e.scope - libcontainer container 8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e. Apr 30 01:28:29.266492 containerd[1508]: time="2025-04-30T01:28:29.266411104Z" level=info msg="StartContainer for \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\" returns successfully" Apr 30 01:28:29.288562 systemd[1]: cri-containerd-8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e.scope: Deactivated successfully. Apr 30 01:28:29.592273 containerd[1508]: time="2025-04-30T01:28:29.572380520Z" level=info msg="shim disconnected" id=8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e namespace=k8s.io Apr 30 01:28:29.592273 containerd[1508]: time="2025-04-30T01:28:29.592207603Z" level=warning msg="cleaning up after shim disconnected" id=8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e namespace=k8s.io Apr 30 01:28:29.592273 containerd[1508]: time="2025-04-30T01:28:29.592260539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:28:29.705720 containerd[1508]: time="2025-04-30T01:28:29.704344889Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 01:28:29.722047 containerd[1508]: time="2025-04-30T01:28:29.721730590Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\"" Apr 30 01:28:29.726646 containerd[1508]: time="2025-04-30T01:28:29.723100548Z" level=info msg="StartContainer for \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\"" Apr 30 01:28:29.767161 systemd[1]: Started cri-containerd-953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3.scope - libcontainer container 953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3. Apr 30 01:28:29.821995 containerd[1508]: time="2025-04-30T01:28:29.821292850Z" level=info msg="StartContainer for \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\" returns successfully" Apr 30 01:28:29.839978 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 01:28:29.842093 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 01:28:29.842471 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 01:28:29.849416 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 01:28:29.850362 systemd[1]: cri-containerd-953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3.scope: Deactivated successfully. Apr 30 01:28:29.894039 containerd[1508]: time="2025-04-30T01:28:29.893544319Z" level=info msg="shim disconnected" id=953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3 namespace=k8s.io Apr 30 01:28:29.894039 containerd[1508]: time="2025-04-30T01:28:29.893788601Z" level=warning msg="cleaning up after shim disconnected" id=953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3 namespace=k8s.io Apr 30 01:28:29.894039 containerd[1508]: time="2025-04-30T01:28:29.893811605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:28:29.924088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 01:28:30.054680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e-rootfs.mount: Deactivated successfully. Apr 30 01:28:30.709796 containerd[1508]: time="2025-04-30T01:28:30.709565576Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 01:28:30.779403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053872557.mount: Deactivated successfully. Apr 30 01:28:30.781068 containerd[1508]: time="2025-04-30T01:28:30.780993340Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\"" Apr 30 01:28:30.785557 containerd[1508]: time="2025-04-30T01:28:30.785520889Z" level=info msg="StartContainer for \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\"" Apr 30 01:28:30.854238 systemd[1]: Started cri-containerd-a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7.scope - libcontainer container a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7. Apr 30 01:28:30.901872 containerd[1508]: time="2025-04-30T01:28:30.901800237Z" level=info msg="StartContainer for \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\" returns successfully" Apr 30 01:28:30.912612 systemd[1]: cri-containerd-a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7.scope: Deactivated successfully. Apr 30 01:28:30.949826 containerd[1508]: time="2025-04-30T01:28:30.949696578Z" level=info msg="shim disconnected" id=a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7 namespace=k8s.io Apr 30 01:28:30.949826 containerd[1508]: time="2025-04-30T01:28:30.949811375Z" level=warning msg="cleaning up after shim disconnected" id=a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7 namespace=k8s.io Apr 30 01:28:30.949826 containerd[1508]: time="2025-04-30T01:28:30.949827983Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:28:31.054221 systemd[1]: run-containerd-runc-k8s.io-a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7-runc.DS82wk.mount: Deactivated successfully. Apr 30 01:28:31.054385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7-rootfs.mount: Deactivated successfully. Apr 30 01:28:31.715375 containerd[1508]: time="2025-04-30T01:28:31.715272918Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 01:28:31.748668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021808114.mount: Deactivated successfully. 
Apr 30 01:28:31.752373 containerd[1508]: time="2025-04-30T01:28:31.752319491Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\"" Apr 30 01:28:31.755209 containerd[1508]: time="2025-04-30T01:28:31.755167799Z" level=info msg="StartContainer for \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\"" Apr 30 01:28:31.802289 systemd[1]: Started cri-containerd-fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b.scope - libcontainer container fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b. Apr 30 01:28:31.843235 systemd[1]: cri-containerd-fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b.scope: Deactivated successfully. Apr 30 01:28:31.845635 containerd[1508]: time="2025-04-30T01:28:31.845576175Z" level=info msg="StartContainer for \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\" returns successfully" Apr 30 01:28:31.880145 containerd[1508]: time="2025-04-30T01:28:31.880007534Z" level=info msg="shim disconnected" id=fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b namespace=k8s.io Apr 30 01:28:31.880145 containerd[1508]: time="2025-04-30T01:28:31.880109611Z" level=warning msg="cleaning up after shim disconnected" id=fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b namespace=k8s.io Apr 30 01:28:31.880145 containerd[1508]: time="2025-04-30T01:28:31.880125299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:28:32.054725 systemd[1]: run-containerd-runc-k8s.io-fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b-runc.ATdFD7.mount: Deactivated successfully. Apr 30 01:28:32.055948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b-rootfs.mount: Deactivated successfully. Apr 30 01:28:32.071905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204211044.mount: Deactivated successfully. Apr 30 01:28:32.728547 containerd[1508]: time="2025-04-30T01:28:32.727952084Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 01:28:32.764630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382934232.mount: Deactivated successfully. Apr 30 01:28:32.778473 containerd[1508]: time="2025-04-30T01:28:32.778400923Z" level=info msg="CreateContainer within sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\"" Apr 30 01:28:32.784408 containerd[1508]: time="2025-04-30T01:28:32.784362419Z" level=info msg="StartContainer for \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\"" Apr 30 01:28:32.856375 systemd[1]: Started cri-containerd-b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f.scope - libcontainer container b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f. 
Apr 30 01:28:32.976667 containerd[1508]: time="2025-04-30T01:28:32.974256491Z" level=info msg="StartContainer for \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\" returns successfully" Apr 30 01:28:33.338338 kubelet[2725]: I0430 01:28:33.338285 2725 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 01:28:33.437661 systemd[1]: Created slice kubepods-burstable-podd1dfcd76_6707_4415_9067_a1e85104b356.slice - libcontainer container kubepods-burstable-podd1dfcd76_6707_4415_9067_a1e85104b356.slice. Apr 30 01:28:33.447993 kubelet[2725]: I0430 01:28:33.445141 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgbn\" (UniqueName: \"kubernetes.io/projected/d1dfcd76-6707-4415-9067-a1e85104b356-kube-api-access-fdgbn\") pod \"coredns-668d6bf9bc-lkqjj\" (UID: \"d1dfcd76-6707-4415-9067-a1e85104b356\") " pod="kube-system/coredns-668d6bf9bc-lkqjj" Apr 30 01:28:33.447993 kubelet[2725]: I0430 01:28:33.445496 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1dfcd76-6707-4415-9067-a1e85104b356-config-volume\") pod \"coredns-668d6bf9bc-lkqjj\" (UID: \"d1dfcd76-6707-4415-9067-a1e85104b356\") " pod="kube-system/coredns-668d6bf9bc-lkqjj" Apr 30 01:28:33.447993 kubelet[2725]: I0430 01:28:33.445677 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e267952-7573-4e96-a216-6f5ae90b3094-config-volume\") pod \"coredns-668d6bf9bc-fhlrk\" (UID: \"2e267952-7573-4e96-a216-6f5ae90b3094\") " pod="kube-system/coredns-668d6bf9bc-fhlrk" Apr 30 01:28:33.447993 kubelet[2725]: I0430 01:28:33.445782 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x28mn\" (UniqueName: \"kubernetes.io/projected/2e267952-7573-4e96-a216-6f5ae90b3094-kube-api-access-x28mn\") pod \"coredns-668d6bf9bc-fhlrk\" (UID: \"2e267952-7573-4e96-a216-6f5ae90b3094\") " pod="kube-system/coredns-668d6bf9bc-fhlrk" Apr 30 01:28:33.458074 systemd[1]: Created slice kubepods-burstable-pod2e267952_7573_4e96_a216_6f5ae90b3094.slice - libcontainer container kubepods-burstable-pod2e267952_7573_4e96_a216_6f5ae90b3094.slice. 
Apr 30 01:28:33.689024 containerd[1508]: time="2025-04-30T01:28:33.688666990Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:28:33.691904 containerd[1508]: time="2025-04-30T01:28:33.691812794Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 01:28:33.693407 containerd[1508]: time="2025-04-30T01:28:33.693355878Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:28:33.699049 containerd[1508]: time="2025-04-30T01:28:33.698632646Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.717489583s" Apr 30 01:28:33.699049 containerd[1508]: time="2025-04-30T01:28:33.698710384Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 01:28:33.708377 containerd[1508]: time="2025-04-30T01:28:33.708317824Z" level=info msg="CreateContainer within sandbox \"b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 01:28:33.741249 containerd[1508]: time="2025-04-30T01:28:33.738933573Z" level=info msg="CreateContainer within sandbox \"b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\"" Apr 30 01:28:33.743406 containerd[1508]: time="2025-04-30T01:28:33.743377004Z" level=info msg="StartContainer for \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\"" Apr 30 01:28:33.752989 containerd[1508]: time="2025-04-30T01:28:33.752479837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lkqjj,Uid:d1dfcd76-6707-4415-9067-a1e85104b356,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:33.801290 containerd[1508]: time="2025-04-30T01:28:33.801233370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhlrk,Uid:2e267952-7573-4e96-a216-6f5ae90b3094,Namespace:kube-system,Attempt:0,}" Apr 30 01:28:33.811504 kubelet[2725]: I0430 01:28:33.811072 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zmqzb" podStartSLOduration=6.160391126 podStartE2EDuration="16.810903064s" podCreationTimestamp="2025-04-30 01:28:17 +0000 UTC" firstStartedPulling="2025-04-30 01:28:18.330220269 +0000 UTC m=+4.999417464" lastFinishedPulling="2025-04-30 01:28:28.9807322 +0000 UTC m=+15.649929402" observedRunningTime="2025-04-30 01:28:33.80867818 +0000 UTC m=+20.477875389" watchObservedRunningTime="2025-04-30 01:28:33.810903064 +0000 UTC m=+20.480100254" Apr 30 01:28:33.876485 systemd[1]: Started 
cri-containerd-b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc.scope - libcontainer container b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc. Apr 30 01:28:34.072474 containerd[1508]: time="2025-04-30T01:28:34.071785393Z" level=info msg="StartContainer for \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\" returns successfully" Apr 30 01:28:38.062180 systemd-networkd[1425]: cilium_host: Link UP Apr 30 01:28:38.062860 systemd-networkd[1425]: cilium_net: Link UP Apr 30 01:28:38.062866 systemd-networkd[1425]: cilium_net: Gained carrier Apr 30 01:28:38.063793 systemd-networkd[1425]: cilium_host: Gained carrier Apr 30 01:28:38.249595 systemd-networkd[1425]: cilium_vxlan: Link UP Apr 30 01:28:38.249612 systemd-networkd[1425]: cilium_vxlan: Gained carrier Apr 30 01:28:38.523235 systemd-networkd[1425]: cilium_host: Gained IPv6LL Apr 30 01:28:38.699287 systemd-networkd[1425]: cilium_net: Gained IPv6LL Apr 30 01:28:38.804381 kernel: NET: Registered PF_ALG protocol family Apr 30 01:28:39.857585 systemd-networkd[1425]: lxc_health: Link UP Apr 30 01:28:39.867056 systemd-networkd[1425]: lxc_health: Gained carrier Apr 30 01:28:40.043166 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL Apr 30 01:28:40.224648 kubelet[2725]: I0430 01:28:40.224419 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2l27n" podStartSLOduration=7.156383495 podStartE2EDuration="22.224394605s" podCreationTimestamp="2025-04-30 01:28:18 +0000 UTC" firstStartedPulling="2025-04-30 01:28:18.63579714 +0000 UTC m=+5.304994335" lastFinishedPulling="2025-04-30 01:28:33.703808249 +0000 UTC m=+20.373005445" observedRunningTime="2025-04-30 01:28:34.893927141 +0000 UTC m=+21.563124335" watchObservedRunningTime="2025-04-30 01:28:40.224394605 +0000 UTC m=+26.893591798" Apr 30 01:28:40.375387 systemd-networkd[1425]: lxcf40f729b18ca: Link UP Apr 30 01:28:40.391990 kernel: eth0: renamed from tmp9d84c Apr 30 01:28:40.402552 systemd-networkd[1425]: lxcf40f729b18ca: Gained carrier Apr 30 01:28:40.540357 systemd-networkd[1425]: lxc0a31043f0b2f: Link UP Apr 30 01:28:40.559054 kernel: eth0: renamed from tmp38970 Apr 30 01:28:40.572196 systemd-networkd[1425]: lxc0a31043f0b2f: Gained carrier Apr 30 01:28:41.195276 systemd-networkd[1425]: lxc_health: Gained IPv6LL Apr 30 01:28:41.835191 systemd-networkd[1425]: lxc0a31043f0b2f: Gained IPv6LL Apr 30 01:28:42.283477 systemd-networkd[1425]: lxcf40f729b18ca: Gained IPv6LL Apr 30 01:28:46.086518 containerd[1508]: time="2025-04-30T01:28:46.085645110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:46.086518 containerd[1508]: time="2025-04-30T01:28:46.085784970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:46.086518 containerd[1508]: time="2025-04-30T01:28:46.085808416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:46.088235 containerd[1508]: time="2025-04-30T01:28:46.088051965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:46.156302 systemd[1]: run-containerd-runc-k8s.io-9d84ca1f486dbb7d58aa70391668a0449ab86ff2127f6fe5b011dcc83cd0cac2-runc.X2v2wp.mount: Deactivated successfully. 
Apr 30 01:28:46.164425 containerd[1508]: time="2025-04-30T01:28:46.163152441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:28:46.164425 containerd[1508]: time="2025-04-30T01:28:46.163400772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:28:46.164425 containerd[1508]: time="2025-04-30T01:28:46.163441569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:46.164425 containerd[1508]: time="2025-04-30T01:28:46.164261098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:28:46.204323 systemd[1]: Started cri-containerd-9d84ca1f486dbb7d58aa70391668a0449ab86ff2127f6fe5b011dcc83cd0cac2.scope - libcontainer container 9d84ca1f486dbb7d58aa70391668a0449ab86ff2127f6fe5b011dcc83cd0cac2. Apr 30 01:28:46.248215 systemd[1]: Started cri-containerd-38970298acc8307c63d15e4b59ba414095f15f30fcda849adef4c5067e3c5234.scope - libcontainer container 38970298acc8307c63d15e4b59ba414095f15f30fcda849adef4c5067e3c5234. Apr 30 01:28:46.359987 containerd[1508]: time="2025-04-30T01:28:46.359924182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lkqjj,Uid:d1dfcd76-6707-4415-9067-a1e85104b356,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d84ca1f486dbb7d58aa70391668a0449ab86ff2127f6fe5b011dcc83cd0cac2\"" Apr 30 01:28:46.370657 containerd[1508]: time="2025-04-30T01:28:46.370378173Z" level=info msg="CreateContainer within sandbox \"9d84ca1f486dbb7d58aa70391668a0449ab86ff2127f6fe5b011dcc83cd0cac2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 01:28:46.373355 containerd[1508]: time="2025-04-30T01:28:46.373318807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhlrk,Uid:2e267952-7573-4e96-a216-6f5ae90b3094,Namespace:kube-system,Attempt:0,} returns sandbox id \"38970298acc8307c63d15e4b59ba414095f15f30fcda849adef4c5067e3c5234\"" Apr 30 01:28:46.377457 containerd[1508]: time="2025-04-30T01:28:46.377341884Z" level=info msg="CreateContainer within sandbox \"38970298acc8307c63d15e4b59ba414095f15f30fcda849adef4c5067e3c5234\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 01:28:46.406141 containerd[1508]: time="2025-04-30T01:28:46.406023227Z" level=info msg="CreateContainer within sandbox \"9d84ca1f486dbb7d58aa70391668a0449ab86ff2127f6fe5b011dcc83cd0cac2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c68216cd18752b55d0d0cbddef1f4694be3324a12d475e99ea864d9be582e0f2\"" Apr 30 01:28:46.408669 containerd[1508]: time="2025-04-30T01:28:46.408626865Z" level=info msg="StartContainer for \"c68216cd18752b55d0d0cbddef1f4694be3324a12d475e99ea864d9be582e0f2\"" Apr 30 01:28:46.409069 containerd[1508]: time="2025-04-30T01:28:46.408930066Z" level=info msg="CreateContainer within sandbox \"38970298acc8307c63d15e4b59ba414095f15f30fcda849adef4c5067e3c5234\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6974f862c2aee80ce381f7b2b13bdc3240f746e9e156fe1fa652bb175d3e6f8\"" Apr 30 01:28:46.411175 containerd[1508]: time="2025-04-30T01:28:46.411073271Z" level=info msg="StartContainer for \"e6974f862c2aee80ce381f7b2b13bdc3240f746e9e156fe1fa652bb175d3e6f8\"" Apr 30 01:28:46.470171 systemd[1]: Started 
cri-containerd-c68216cd18752b55d0d0cbddef1f4694be3324a12d475e99ea864d9be582e0f2.scope - libcontainer container c68216cd18752b55d0d0cbddef1f4694be3324a12d475e99ea864d9be582e0f2. Apr 30 01:28:46.482850 systemd[1]: Started cri-containerd-e6974f862c2aee80ce381f7b2b13bdc3240f746e9e156fe1fa652bb175d3e6f8.scope - libcontainer container e6974f862c2aee80ce381f7b2b13bdc3240f746e9e156fe1fa652bb175d3e6f8. Apr 30 01:28:46.545530 containerd[1508]: time="2025-04-30T01:28:46.545472422Z" level=info msg="StartContainer for \"c68216cd18752b55d0d0cbddef1f4694be3324a12d475e99ea864d9be582e0f2\" returns successfully" Apr 30 01:28:46.545893 containerd[1508]: time="2025-04-30T01:28:46.545658949Z" level=info msg="StartContainer for \"e6974f862c2aee80ce381f7b2b13bdc3240f746e9e156fe1fa652bb175d3e6f8\" returns successfully" Apr 30 01:28:46.835061 kubelet[2725]: I0430 01:28:46.833229 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fhlrk" podStartSLOduration=28.833195285 podStartE2EDuration="28.833195285s" podCreationTimestamp="2025-04-30 01:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:28:46.827733125 +0000 UTC m=+33.496930336" watchObservedRunningTime="2025-04-30 01:28:46.833195285 +0000 UTC m=+33.502392478" Apr 30 01:28:47.824822 kubelet[2725]: I0430 01:28:47.824401 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lkqjj" podStartSLOduration=29.824381793 podStartE2EDuration="29.824381793s" podCreationTimestamp="2025-04-30 01:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:28:46.852062117 +0000 UTC m=+33.521259331" watchObservedRunningTime="2025-04-30 01:28:47.824381793 +0000 UTC m=+34.493578983" Apr 30 01:29:24.758442 systemd[1]: Started sshd@9-10.243.74.78:22-139.178.68.195:50140.service - OpenSSH per-connection server daemon (139.178.68.195:50140). Apr 30 01:29:25.685555 sshd[4119]: Accepted publickey for core from 139.178.68.195 port 50140 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:29:25.688207 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:29:25.697904 systemd-logind[1491]: New session 12 of user core. Apr 30 01:29:25.704201 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 01:29:26.845298 sshd[4121]: Connection closed by 139.178.68.195 port 50140 Apr 30 01:29:26.846555 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Apr 30 01:29:26.853870 systemd[1]: sshd@9-10.243.74.78:22-139.178.68.195:50140.service: Deactivated successfully. Apr 30 01:29:26.859372 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 01:29:26.860546 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit. Apr 30 01:29:26.863015 systemd-logind[1491]: Removed session 12. Apr 30 01:29:32.010519 systemd[1]: Started sshd@10-10.243.74.78:22-139.178.68.195:42542.service - OpenSSH per-connection server daemon (139.178.68.195:42542). 
Apr 30 01:29:32.918350 sshd[4133]: Accepted publickey for core from 139.178.68.195 port 42542 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:29:32.921371 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:29:32.929211 systemd-logind[1491]: New session 13 of user core. Apr 30 01:29:32.935369 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 01:29:33.655608 sshd[4135]: Connection closed by 139.178.68.195 port 42542 Apr 30 01:29:33.656765 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Apr 30 01:29:33.663840 systemd[1]: sshd@10-10.243.74.78:22-139.178.68.195:42542.service: Deactivated successfully. Apr 30 01:29:33.667189 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 01:29:33.668444 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit. Apr 30 01:29:33.670171 systemd-logind[1491]: Removed session 13. Apr 30 01:29:38.819447 systemd[1]: Started sshd@11-10.243.74.78:22-139.178.68.195:45480.service - OpenSSH per-connection server daemon (139.178.68.195:45480). Apr 30 01:29:39.736706 sshd[4147]: Accepted publickey for core from 139.178.68.195 port 45480 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:29:39.738898 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:29:39.745324 systemd-logind[1491]: New session 14 of user core. Apr 30 01:29:39.753228 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 01:29:40.495004 sshd[4149]: Connection closed by 139.178.68.195 port 45480 Apr 30 01:29:40.496252 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Apr 30 01:29:40.502139 systemd[1]: sshd@11-10.243.74.78:22-139.178.68.195:45480.service: Deactivated successfully. Apr 30 01:29:40.504643 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 01:29:40.506028 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit. Apr 30 01:29:40.507796 systemd-logind[1491]: Removed session 14. Apr 30 01:29:45.656259 systemd[1]: Started sshd@12-10.243.74.78:22-139.178.68.195:40356.service - OpenSSH per-connection server daemon (139.178.68.195:40356). Apr 30 01:29:46.583240 sshd[4161]: Accepted publickey for core from 139.178.68.195 port 40356 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:29:46.585489 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:29:46.592016 systemd-logind[1491]: New session 15 of user core. Apr 30 01:29:46.598178 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 01:29:47.316905 sshd[4163]: Connection closed by 139.178.68.195 port 40356 Apr 30 01:29:47.318124 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Apr 30 01:29:47.323994 systemd[1]: sshd@12-10.243.74.78:22-139.178.68.195:40356.service: Deactivated successfully. Apr 30 01:29:47.326506 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 01:29:47.327578 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit. Apr 30 01:29:47.329620 systemd-logind[1491]: Removed session 15. Apr 30 01:29:47.480422 systemd[1]: Started sshd@13-10.243.74.78:22-139.178.68.195:40372.service - OpenSSH per-connection server daemon (139.178.68.195:40372). 
Apr 30 01:29:48.376657 sshd[4175]: Accepted publickey for core from 139.178.68.195 port 40372 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:29:48.378985 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:29:48.385822 systemd-logind[1491]: New session 16 of user core. Apr 30 01:29:48.396183 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 01:29:49.147464 sshd[4177]: Connection closed by 139.178.68.195 port 40372 Apr 30 01:29:49.147317 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Apr 30 01:29:49.152338 systemd[1]: sshd@13-10.243.74.78:22-139.178.68.195:40372.service: Deactivated successfully. Apr 30 01:29:49.154949 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 01:29:49.157061 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit. Apr 30 01:29:49.158470 systemd-logind[1491]: Removed session 16. Apr 30 01:29:49.303333 systemd[1]: Started sshd@14-10.243.74.78:22-139.178.68.195:40386.service - OpenSSH per-connection server daemon (139.178.68.195:40386). Apr 30 01:29:50.213299 sshd[4186]: Accepted publickey for core from 139.178.68.195 port 40386 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:29:50.216058 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:29:50.222780 systemd-logind[1491]: New session 17 of user core. Apr 30 01:29:50.226134 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 01:29:50.931007 sshd[4190]: Connection closed by 139.178.68.195 port 40386 Apr 30 01:29:50.931860 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Apr 30 01:29:50.937024 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit. Apr 30 01:29:50.938565 systemd[1]: sshd@14-10.243.74.78:22-139.178.68.195:40386.service: Deactivated successfully. Apr 30 01:29:50.941931 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 01:29:50.943933 systemd-logind[1491]: Removed session 17. Apr 30 01:29:56.091404 systemd[1]: Started sshd@15-10.243.74.78:22-139.178.68.195:48180.service - OpenSSH per-connection server daemon (139.178.68.195:48180). Apr 30 01:29:56.986090 sshd[4201]: Accepted publickey for core from 139.178.68.195 port 48180 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:29:56.988029 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:29:56.996177 systemd-logind[1491]: New session 18 of user core. Apr 30 01:29:57.002250 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 01:29:57.688030 sshd[4203]: Connection closed by 139.178.68.195 port 48180 Apr 30 01:29:57.689192 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Apr 30 01:29:57.694877 systemd[1]: sshd@15-10.243.74.78:22-139.178.68.195:48180.service: Deactivated successfully. Apr 30 01:29:57.697661 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 01:29:57.698781 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit. Apr 30 01:29:57.701095 systemd-logind[1491]: Removed session 18. Apr 30 01:30:02.855395 systemd[1]: Started sshd@16-10.243.74.78:22-139.178.68.195:48182.service - OpenSSH per-connection server daemon (139.178.68.195:48182). 
Apr 30 01:30:03.767001 sshd[4214]: Accepted publickey for core from 139.178.68.195 port 48182 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:03.769111 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:03.777266 systemd-logind[1491]: New session 19 of user core. Apr 30 01:30:03.784376 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 01:30:04.479075 sshd[4216]: Connection closed by 139.178.68.195 port 48182 Apr 30 01:30:04.480157 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:04.485677 systemd[1]: sshd@16-10.243.74.78:22-139.178.68.195:48182.service: Deactivated successfully. Apr 30 01:30:04.488824 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 01:30:04.490829 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit. Apr 30 01:30:04.492775 systemd-logind[1491]: Removed session 19. Apr 30 01:30:04.642680 systemd[1]: Started sshd@17-10.243.74.78:22-139.178.68.195:48188.service - OpenSSH per-connection server daemon (139.178.68.195:48188). Apr 30 01:30:05.547707 sshd[4226]: Accepted publickey for core from 139.178.68.195 port 48188 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:05.549810 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:05.556994 systemd-logind[1491]: New session 20 of user core. Apr 30 01:30:05.569364 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 01:30:06.610018 sshd[4228]: Connection closed by 139.178.68.195 port 48188 Apr 30 01:30:06.610879 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:06.616348 systemd[1]: sshd@17-10.243.74.78:22-139.178.68.195:48188.service: Deactivated successfully. Apr 30 01:30:06.618991 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 01:30:06.619944 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit. Apr 30 01:30:06.621513 systemd-logind[1491]: Removed session 20. Apr 30 01:30:06.770452 systemd[1]: Started sshd@18-10.243.74.78:22-139.178.68.195:36314.service - OpenSSH per-connection server daemon (139.178.68.195:36314). Apr 30 01:30:07.681552 sshd[4237]: Accepted publickey for core from 139.178.68.195 port 36314 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:07.683450 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:07.689830 systemd-logind[1491]: New session 21 of user core. Apr 30 01:30:07.697212 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 01:30:09.565944 sshd[4239]: Connection closed by 139.178.68.195 port 36314 Apr 30 01:30:09.567109 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:09.577037 systemd[1]: sshd@18-10.243.74.78:22-139.178.68.195:36314.service: Deactivated successfully. Apr 30 01:30:09.580932 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 01:30:09.582629 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit. Apr 30 01:30:09.584619 systemd-logind[1491]: Removed session 21. Apr 30 01:30:09.725389 systemd[1]: Started sshd@19-10.243.74.78:22-139.178.68.195:36330.service - OpenSSH per-connection server daemon (139.178.68.195:36330). 
Apr 30 01:30:10.618330 sshd[4255]: Accepted publickey for core from 139.178.68.195 port 36330 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:10.620494 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:10.628663 systemd-logind[1491]: New session 22 of user core. Apr 30 01:30:10.640206 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 01:30:11.548382 sshd[4257]: Connection closed by 139.178.68.195 port 36330 Apr 30 01:30:11.548153 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:11.554158 systemd[1]: sshd@19-10.243.74.78:22-139.178.68.195:36330.service: Deactivated successfully. Apr 30 01:30:11.557758 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 01:30:11.559866 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit. Apr 30 01:30:11.561286 systemd-logind[1491]: Removed session 22. Apr 30 01:30:11.702399 systemd[1]: Started sshd@20-10.243.74.78:22-139.178.68.195:36338.service - OpenSSH per-connection server daemon (139.178.68.195:36338). Apr 30 01:30:12.600190 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 36338 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:12.603295 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:12.610774 systemd-logind[1491]: New session 23 of user core. Apr 30 01:30:12.619234 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 01:30:13.306028 sshd[4268]: Connection closed by 139.178.68.195 port 36338 Apr 30 01:30:13.307100 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:13.311410 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit. Apr 30 01:30:13.312542 systemd[1]: sshd@20-10.243.74.78:22-139.178.68.195:36338.service: Deactivated successfully. Apr 30 01:30:13.315337 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 01:30:13.317951 systemd-logind[1491]: Removed session 23. Apr 30 01:30:18.467328 systemd[1]: Started sshd@21-10.243.74.78:22-139.178.68.195:55436.service - OpenSSH per-connection server daemon (139.178.68.195:55436). Apr 30 01:30:19.364290 sshd[4282]: Accepted publickey for core from 139.178.68.195 port 55436 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:19.366283 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:19.374063 systemd-logind[1491]: New session 24 of user core. Apr 30 01:30:19.378159 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 01:30:20.065992 sshd[4284]: Connection closed by 139.178.68.195 port 55436 Apr 30 01:30:20.067070 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:20.071897 systemd-logind[1491]: Session 24 logged out. Waiting for processes to exit. Apr 30 01:30:20.072525 systemd[1]: sshd@21-10.243.74.78:22-139.178.68.195:55436.service: Deactivated successfully. Apr 30 01:30:20.074945 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 01:30:20.076460 systemd-logind[1491]: Removed session 24. Apr 30 01:30:25.227374 systemd[1]: Started sshd@22-10.243.74.78:22-139.178.68.195:37164.service - OpenSSH per-connection server daemon (139.178.68.195:37164). 
Apr 30 01:30:26.145594 sshd[4296]: Accepted publickey for core from 139.178.68.195 port 37164 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:26.147974 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:26.157271 systemd-logind[1491]: New session 25 of user core. Apr 30 01:30:26.165338 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 01:30:26.870288 sshd[4298]: Connection closed by 139.178.68.195 port 37164 Apr 30 01:30:26.871525 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:26.878125 systemd[1]: sshd@22-10.243.74.78:22-139.178.68.195:37164.service: Deactivated successfully. Apr 30 01:30:26.881036 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 01:30:26.882846 systemd-logind[1491]: Session 25 logged out. Waiting for processes to exit. Apr 30 01:30:26.884648 systemd-logind[1491]: Removed session 25. Apr 30 01:30:32.034392 systemd[1]: Started sshd@23-10.243.74.78:22-139.178.68.195:37176.service - OpenSSH per-connection server daemon (139.178.68.195:37176). Apr 30 01:30:32.926559 sshd[4309]: Accepted publickey for core from 139.178.68.195 port 37176 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:32.929005 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:32.935323 systemd-logind[1491]: New session 26 of user core. Apr 30 01:30:32.941188 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 01:30:33.636663 sshd[4311]: Connection closed by 139.178.68.195 port 37176 Apr 30 01:30:33.637643 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:33.642173 systemd-logind[1491]: Session 26 logged out. Waiting for processes to exit. Apr 30 01:30:33.643390 systemd[1]: sshd@23-10.243.74.78:22-139.178.68.195:37176.service: Deactivated successfully. Apr 30 01:30:33.646157 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 01:30:33.647434 systemd-logind[1491]: Removed session 26. Apr 30 01:30:33.797734 systemd[1]: Started sshd@24-10.243.74.78:22-139.178.68.195:37188.service - OpenSSH per-connection server daemon (139.178.68.195:37188). Apr 30 01:30:34.693903 sshd[4322]: Accepted publickey for core from 139.178.68.195 port 37188 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:34.695972 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:34.702351 systemd-logind[1491]: New session 27 of user core. Apr 30 01:30:34.714205 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 01:30:37.137977 systemd[1]: run-containerd-runc-k8s.io-b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f-runc.1kW7VW.mount: Deactivated successfully. 
Apr 30 01:30:37.169696 containerd[1508]: time="2025-04-30T01:30:37.169577904Z" level=info msg="StopContainer for \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\" with timeout 30 (s)" Apr 30 01:30:37.171839 containerd[1508]: time="2025-04-30T01:30:37.171109293Z" level=info msg="Stop container \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\" with signal terminated" Apr 30 01:30:37.172053 containerd[1508]: time="2025-04-30T01:30:37.172004284Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 01:30:37.180693 containerd[1508]: time="2025-04-30T01:30:37.180623235Z" level=info msg="StopContainer for \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\" with timeout 2 (s)" Apr 30 01:30:37.181298 containerd[1508]: time="2025-04-30T01:30:37.181081547Z" level=info msg="Stop container \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\" with signal terminated" Apr 30 01:30:37.198632 systemd[1]: cri-containerd-b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc.scope: Deactivated successfully. Apr 30 01:30:37.199680 systemd-networkd[1425]: lxc_health: Link DOWN Apr 30 01:30:37.199687 systemd-networkd[1425]: lxc_health: Lost carrier Apr 30 01:30:37.229066 systemd[1]: cri-containerd-b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f.scope: Deactivated successfully. Apr 30 01:30:37.230448 systemd[1]: cri-containerd-b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f.scope: Consumed 10.315s CPU time. Apr 30 01:30:37.266393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc-rootfs.mount: Deactivated successfully. Apr 30 01:30:37.269926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f-rootfs.mount: Deactivated successfully. 
Apr 30 01:30:37.342873 containerd[1508]: time="2025-04-30T01:30:37.342478182Z" level=info msg="shim disconnected" id=b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f namespace=k8s.io Apr 30 01:30:37.342873 containerd[1508]: time="2025-04-30T01:30:37.342846299Z" level=warning msg="cleaning up after shim disconnected" id=b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f namespace=k8s.io Apr 30 01:30:37.343329 containerd[1508]: time="2025-04-30T01:30:37.342919583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:37.343329 containerd[1508]: time="2025-04-30T01:30:37.342538877Z" level=info msg="shim disconnected" id=b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc namespace=k8s.io Apr 30 01:30:37.343798 containerd[1508]: time="2025-04-30T01:30:37.343016750Z" level=warning msg="cleaning up after shim disconnected" id=b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc namespace=k8s.io Apr 30 01:30:37.343798 containerd[1508]: time="2025-04-30T01:30:37.343507523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:37.371265 containerd[1508]: time="2025-04-30T01:30:37.371197807Z" level=info msg="StopContainer for \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\" returns successfully" Apr 30 01:30:37.372777 containerd[1508]: time="2025-04-30T01:30:37.372588854Z" level=info msg="StopContainer for \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\" returns successfully" Apr 30 01:30:37.377240 containerd[1508]: time="2025-04-30T01:30:37.377126229Z" level=info msg="StopPodSandbox for \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\"" Apr 30 01:30:37.377591 containerd[1508]: time="2025-04-30T01:30:37.377544263Z" level=info msg="StopPodSandbox for \"b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d\"" Apr 30 01:30:37.383889 containerd[1508]: time="2025-04-30T01:30:37.383838415Z" level=info msg="Container to stop \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:30:37.387661 containerd[1508]: time="2025-04-30T01:30:37.383835268Z" level=info msg="Container to stop \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:30:37.387661 containerd[1508]: time="2025-04-30T01:30:37.387615413Z" level=info msg="Container to stop \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:30:37.388000 containerd[1508]: time="2025-04-30T01:30:37.387633626Z" level=info msg="Container to stop \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:30:37.388000 containerd[1508]: time="2025-04-30T01:30:37.387974080Z" level=info msg="Container to stop \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:30:37.388130 containerd[1508]: time="2025-04-30T01:30:37.388012051Z" level=info msg="Container to stop \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 01:30:37.388582 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d-shm.mount: Deactivated successfully. Apr 30 01:30:37.402396 systemd[1]: cri-containerd-2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e.scope: Deactivated successfully. Apr 30 01:30:37.405288 systemd[1]: cri-containerd-b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d.scope: Deactivated successfully. Apr 30 01:30:37.449136 containerd[1508]: time="2025-04-30T01:30:37.449049972Z" level=info msg="shim disconnected" id=b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d namespace=k8s.io Apr 30 01:30:37.449870 containerd[1508]: time="2025-04-30T01:30:37.449661050Z" level=warning msg="cleaning up after shim disconnected" id=b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d namespace=k8s.io Apr 30 01:30:37.449870 containerd[1508]: time="2025-04-30T01:30:37.449687928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:37.450731 containerd[1508]: time="2025-04-30T01:30:37.450293374Z" level=info msg="shim disconnected" id=2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e namespace=k8s.io Apr 30 01:30:37.450731 containerd[1508]: time="2025-04-30T01:30:37.450336322Z" level=warning msg="cleaning up after shim disconnected" id=2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e namespace=k8s.io Apr 30 01:30:37.450731 containerd[1508]: time="2025-04-30T01:30:37.450468688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:37.477588 containerd[1508]: time="2025-04-30T01:30:37.477532490Z" level=info msg="TearDown network for sandbox \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" successfully" Apr 30 01:30:37.477808 containerd[1508]: time="2025-04-30T01:30:37.477784175Z" level=info msg="StopPodSandbox for \"2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e\" returns successfully" Apr 30 01:30:37.479139 containerd[1508]: time="2025-04-30T01:30:37.479107921Z" level=info msg="TearDown network for sandbox \"b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d\" successfully" Apr 30 01:30:37.479139 containerd[1508]: time="2025-04-30T01:30:37.479138305Z" level=info msg="StopPodSandbox for \"b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d\" returns successfully" Apr 30 01:30:37.584636 kubelet[2725]: I0430 01:30:37.584588 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcrmr\" (UniqueName: \"kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-kube-api-access-hcrmr\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586090 kubelet[2725]: I0430 01:30:37.584644 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-etc-cni-netd\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586090 kubelet[2725]: I0430 01:30:37.584674 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-cgroup\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586090 kubelet[2725]: I0430 01:30:37.584698 2725 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-kernel\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586090 kubelet[2725]: I0430 01:30:37.584727 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x97pc\" (UniqueName: \"kubernetes.io/projected/ced6da18-764a-41bb-85b5-f3af62cdfa1c-kube-api-access-x97pc\") pod \"ced6da18-764a-41bb-85b5-f3af62cdfa1c\" (UID: \"ced6da18-764a-41bb-85b5-f3af62cdfa1c\") " Apr 30 01:30:37.586090 kubelet[2725]: I0430 01:30:37.584770 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-clustermesh-secrets\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586090 kubelet[2725]: I0430 01:30:37.584799 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-run\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586387 kubelet[2725]: I0430 01:30:37.584839 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-xtables-lock\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586387 kubelet[2725]: I0430 01:30:37.584865 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hubble-tls\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586387 kubelet[2725]: I0430 01:30:37.584889 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hostproc\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586387 kubelet[2725]: I0430 01:30:37.584912 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-bpf-maps\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586387 kubelet[2725]: I0430 01:30:37.584939 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ced6da18-764a-41bb-85b5-f3af62cdfa1c-cilium-config-path\") pod \"ced6da18-764a-41bb-85b5-f3af62cdfa1c\" (UID: \"ced6da18-764a-41bb-85b5-f3af62cdfa1c\") " Apr 30 01:30:37.586387 kubelet[2725]: I0430 01:30:37.584985 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-net\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586683 kubelet[2725]: I0430 01:30:37.585013 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cni-path\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586683 kubelet[2725]: I0430 01:30:37.585039 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-lib-modules\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.586683 kubelet[2725]: I0430 01:30:37.585067 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-config-path\") pod \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\" (UID: \"14a5f62f-66ac-4dee-89cc-59dd66c6b2e7\") " Apr 30 01:30:37.591756 kubelet[2725]: I0430 01:30:37.590112 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 01:30:37.591756 kubelet[2725]: I0430 01:30:37.591409 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.591756 kubelet[2725]: I0430 01:30:37.591450 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.593124 kubelet[2725]: I0430 01:30:37.593085 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.596060 kubelet[2725]: I0430 01:30:37.596021 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced6da18-764a-41bb-85b5-f3af62cdfa1c-kube-api-access-x97pc" (OuterVolumeSpecName: "kube-api-access-x97pc") pod "ced6da18-764a-41bb-85b5-f3af62cdfa1c" (UID: "ced6da18-764a-41bb-85b5-f3af62cdfa1c"). InnerVolumeSpecName "kube-api-access-x97pc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 01:30:37.597415 kubelet[2725]: I0430 01:30:37.597108 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-kube-api-access-hcrmr" (OuterVolumeSpecName: "kube-api-access-hcrmr") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "kube-api-access-hcrmr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 01:30:37.598660 kubelet[2725]: I0430 01:30:37.598542 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.599152 kubelet[2725]: I0430 01:30:37.599113 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.600261 kubelet[2725]: I0430 01:30:37.599477 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.603501 kubelet[2725]: I0430 01:30:37.603306 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced6da18-764a-41bb-85b5-f3af62cdfa1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ced6da18-764a-41bb-85b5-f3af62cdfa1c" (UID: "ced6da18-764a-41bb-85b5-f3af62cdfa1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 01:30:37.603501 kubelet[2725]: I0430 01:30:37.603395 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.603501 kubelet[2725]: I0430 01:30:37.603431 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cni-path" (OuterVolumeSpecName: "cni-path") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.603501 kubelet[2725]: I0430 01:30:37.603460 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.605102 kubelet[2725]: I0430 01:30:37.603567 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 01:30:37.605102 kubelet[2725]: I0430 01:30:37.603623 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hostproc" (OuterVolumeSpecName: "hostproc") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 01:30:37.607630 kubelet[2725]: I0430 01:30:37.607583 2725 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" (UID: "14a5f62f-66ac-4dee-89cc-59dd66c6b2e7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 01:30:37.685954 kubelet[2725]: I0430 01:30:37.685778 2725 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-bpf-maps\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.685954 kubelet[2725]: I0430 01:30:37.685849 2725 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ced6da18-764a-41bb-85b5-f3af62cdfa1c-cilium-config-path\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.685954 kubelet[2725]: I0430 01:30:37.685869 2725 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-run\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.685954 kubelet[2725]: I0430 01:30:37.685888 2725 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-xtables-lock\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.685954 kubelet[2725]: I0430 01:30:37.685905 2725 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hubble-tls\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.685954 kubelet[2725]: I0430 01:30:37.685920 2725 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-hostproc\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.685954 kubelet[2725]: I0430 01:30:37.685946 2725 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-net\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686425 kubelet[2725]: I0430 01:30:37.685982 2725 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cni-path\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686425 kubelet[2725]: I0430 01:30:37.686000 2725 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-lib-modules\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686425 kubelet[2725]: I0430 01:30:37.686014 2725 reconciler_common.go:299] "Volume 
detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-config-path\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686425 kubelet[2725]: I0430 01:30:37.686028 2725 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hcrmr\" (UniqueName: \"kubernetes.io/projected/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-kube-api-access-hcrmr\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686425 kubelet[2725]: I0430 01:30:37.686043 2725 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-etc-cni-netd\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686425 kubelet[2725]: I0430 01:30:37.686059 2725 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-cilium-cgroup\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686425 kubelet[2725]: I0430 01:30:37.686073 2725 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-host-proc-sys-kernel\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686756 kubelet[2725]: I0430 01:30:37.686088 2725 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x97pc\" (UniqueName: \"kubernetes.io/projected/ced6da18-764a-41bb-85b5-f3af62cdfa1c-kube-api-access-x97pc\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:37.686756 kubelet[2725]: I0430 01:30:37.686106 2725 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7-clustermesh-secrets\") on node \"srv-tw98u.gb1.brightbox.com\" DevicePath \"\"" Apr 30 01:30:38.095867 systemd[1]: Removed slice kubepods-besteffort-podced6da18_764a_41bb_85b5_f3af62cdfa1c.slice - libcontainer container kubepods-besteffort-podced6da18_764a_41bb_85b5_f3af62cdfa1c.slice. Apr 30 01:30:38.102772 kubelet[2725]: I0430 01:30:38.102684 2725 scope.go:117] "RemoveContainer" containerID="b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc" Apr 30 01:30:38.113082 containerd[1508]: time="2025-04-30T01:30:38.112534279Z" level=info msg="RemoveContainer for \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\"" Apr 30 01:30:38.118049 containerd[1508]: time="2025-04-30T01:30:38.118015811Z" level=info msg="RemoveContainer for \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\" returns successfully" Apr 30 01:30:38.120353 systemd[1]: Removed slice kubepods-burstable-pod14a5f62f_66ac_4dee_89cc_59dd66c6b2e7.slice - libcontainer container kubepods-burstable-pod14a5f62f_66ac_4dee_89cc_59dd66c6b2e7.slice. Apr 30 01:30:38.120700 systemd[1]: kubepods-burstable-pod14a5f62f_66ac_4dee_89cc_59dd66c6b2e7.slice: Consumed 10.446s CPU time. 
Apr 30 01:30:38.124363 kubelet[2725]: I0430 01:30:38.123799 2725 scope.go:117] "RemoveContainer" containerID="b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc" Apr 30 01:30:38.125222 containerd[1508]: time="2025-04-30T01:30:38.124657079Z" level=error msg="ContainerStatus for \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\": not found" Apr 30 01:30:38.127747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1d8565daa860ed9ba68a8be5fa0790b0ffc414eb9060480ab95be938e00ca2d-rootfs.mount: Deactivated successfully. Apr 30 01:30:38.131536 kubelet[2725]: E0430 01:30:38.130357 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\": not found" containerID="b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc" Apr 30 01:30:38.128147 systemd[1]: var-lib-kubelet-pods-ced6da18\x2d764a\x2d41bb\x2d85b5\x2df3af62cdfa1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx97pc.mount: Deactivated successfully. Apr 30 01:30:38.128310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e-rootfs.mount: Deactivated successfully. Apr 30 01:30:38.128436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2808f15b2b2a7aa33b1ba8ccc2ea84bb381fcd72a7594d9f900db7e44366b34e-shm.mount: Deactivated successfully. Apr 30 01:30:38.128580 systemd[1]: var-lib-kubelet-pods-14a5f62f\x2d66ac\x2d4dee\x2d89cc\x2d59dd66c6b2e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhcrmr.mount: Deactivated successfully. Apr 30 01:30:38.128705 systemd[1]: var-lib-kubelet-pods-14a5f62f\x2d66ac\x2d4dee\x2d89cc\x2d59dd66c6b2e7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 01:30:38.128846 systemd[1]: var-lib-kubelet-pods-14a5f62f\x2d66ac\x2d4dee\x2d89cc\x2d59dd66c6b2e7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 30 01:30:38.170011 kubelet[2725]: I0430 01:30:38.134243 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc"} err="failed to get container status \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\": rpc error: code = NotFound desc = an error occurred when try to find container \"b71c3ab25112a8bb620e9db8ab39961803d1b49f45266d8e2374fd93a2dc6ffc\": not found" Apr 30 01:30:38.170011 kubelet[2725]: I0430 01:30:38.169856 2725 scope.go:117] "RemoveContainer" containerID="b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f" Apr 30 01:30:38.171642 containerd[1508]: time="2025-04-30T01:30:38.171588184Z" level=info msg="RemoveContainer for \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\"" Apr 30 01:30:38.175253 containerd[1508]: time="2025-04-30T01:30:38.175221895Z" level=info msg="RemoveContainer for \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\" returns successfully" Apr 30 01:30:38.177304 kubelet[2725]: I0430 01:30:38.177142 2725 scope.go:117] "RemoveContainer" containerID="fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b" Apr 30 01:30:38.179579 containerd[1508]: time="2025-04-30T01:30:38.179539996Z" level=info msg="RemoveContainer for \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\"" Apr 30 01:30:38.182707 containerd[1508]: time="2025-04-30T01:30:38.182621029Z" level=info msg="RemoveContainer for \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\" returns successfully" Apr 30 01:30:38.183022 kubelet[2725]: I0430 01:30:38.182913 2725 scope.go:117] "RemoveContainer" containerID="a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7" Apr 30 01:30:38.184913 containerd[1508]: time="2025-04-30T01:30:38.184869883Z" level=info msg="RemoveContainer for \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\"" Apr 30 01:30:38.187921 containerd[1508]: time="2025-04-30T01:30:38.187889751Z" level=info msg="RemoveContainer for \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\" returns successfully" Apr 30 01:30:38.188359 kubelet[2725]: I0430 01:30:38.188263 2725 scope.go:117] "RemoveContainer" containerID="953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3" Apr 30 01:30:38.190182 containerd[1508]: time="2025-04-30T01:30:38.189831569Z" level=info msg="RemoveContainer for \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\"" Apr 30 01:30:38.193032 containerd[1508]: time="2025-04-30T01:30:38.192960793Z" level=info msg="RemoveContainer for \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\" returns successfully" Apr 30 01:30:38.193573 kubelet[2725]: I0430 01:30:38.193437 2725 scope.go:117] "RemoveContainer" containerID="8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e" Apr 30 01:30:38.194917 containerd[1508]: time="2025-04-30T01:30:38.194855089Z" level=info msg="RemoveContainer for \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\"" Apr 30 01:30:38.197881 containerd[1508]: time="2025-04-30T01:30:38.197851056Z" level=info msg="RemoveContainer for \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\" returns successfully" Apr 30 01:30:38.198125 kubelet[2725]: I0430 01:30:38.198100 2725 scope.go:117] "RemoveContainer" containerID="b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f" Apr 30 01:30:38.198507 containerd[1508]: 
time="2025-04-30T01:30:38.198467887Z" level=error msg="ContainerStatus for \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\": not found" Apr 30 01:30:38.198740 kubelet[2725]: E0430 01:30:38.198673 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\": not found" containerID="b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f" Apr 30 01:30:38.198837 kubelet[2725]: I0430 01:30:38.198737 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f"} err="failed to get container status \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9a724981526fe0b67e2e2ffac81a24287966933071c6e79a2585a784722916f\": not found" Apr 30 01:30:38.198837 kubelet[2725]: I0430 01:30:38.198768 2725 scope.go:117] "RemoveContainer" containerID="fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b" Apr 30 01:30:38.199118 containerd[1508]: time="2025-04-30T01:30:38.199065813Z" level=error msg="ContainerStatus for \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\": not found" Apr 30 01:30:38.199529 kubelet[2725]: E0430 01:30:38.199396 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\": not found" containerID="fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b" Apr 30 01:30:38.199727 kubelet[2725]: I0430 01:30:38.199483 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b"} err="failed to get container status \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe1c791c3ac731715be3ae7cc9130fd9e27468e2fa831c447e4e448f4bc8fc1b\": not found" Apr 30 01:30:38.199727 kubelet[2725]: I0430 01:30:38.199617 2725 scope.go:117] "RemoveContainer" containerID="a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7" Apr 30 01:30:38.200457 containerd[1508]: time="2025-04-30T01:30:38.200008733Z" level=error msg="ContainerStatus for \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\": not found" Apr 30 01:30:38.200534 kubelet[2725]: E0430 01:30:38.200293 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\": not found" containerID="a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7" Apr 30 01:30:38.200534 kubelet[2725]: I0430 01:30:38.200343 2725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7"} err="failed to get container status \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3df37d72ef03dd3ad21e7cd701451dafa3bc10dad63bb84e52f0ab6a2f23ad7\": not found" Apr 30 01:30:38.200534 kubelet[2725]: I0430 01:30:38.200366 2725 scope.go:117] "RemoveContainer" containerID="953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3" Apr 30 01:30:38.200990 containerd[1508]: time="2025-04-30T01:30:38.200799283Z" level=error msg="ContainerStatus for \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\": not found" Apr 30 01:30:38.201152 kubelet[2725]: E0430 01:30:38.201036 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\": not found" containerID="953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3" Apr 30 01:30:38.201152 kubelet[2725]: I0430 01:30:38.201064 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3"} err="failed to get container status \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"953980b9987e3fc960810d5f1dacf0c9ef2f8f4c9c602e54b2603d0f642a43c3\": not found" Apr 30 01:30:38.201152 kubelet[2725]: I0430 01:30:38.201086 2725 scope.go:117] "RemoveContainer" containerID="8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e" Apr 30 01:30:38.202182 kubelet[2725]: E0430 01:30:38.202100 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\": not found" containerID="8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e" Apr 30 01:30:38.202182 kubelet[2725]: I0430 01:30:38.202140 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e"} err="failed to get container status \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\": not found" Apr 30 01:30:38.202339 containerd[1508]: time="2025-04-30T01:30:38.201356633Z" level=error msg="ContainerStatus for \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c168bf9b68e12780ac12c1ea131243642e96b209fdc9bc9d46105f6c2a67b4e\": not found" Apr 30 01:30:38.798144 kubelet[2725]: E0430 01:30:38.798050 2725 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 01:30:39.151285 sshd[4324]: Connection closed by 139.178.68.195 
port 37188 Apr 30 01:30:39.152351 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:39.158350 systemd[1]: sshd@24-10.243.74.78:22-139.178.68.195:37188.service: Deactivated successfully. Apr 30 01:30:39.161171 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 01:30:39.161421 systemd[1]: session-27.scope: Consumed 1.204s CPU time. Apr 30 01:30:39.163092 systemd-logind[1491]: Session 27 logged out. Waiting for processes to exit. Apr 30 01:30:39.165582 systemd-logind[1491]: Removed session 27. Apr 30 01:30:39.313329 systemd[1]: Started sshd@25-10.243.74.78:22-139.178.68.195:54158.service - OpenSSH per-connection server daemon (139.178.68.195:54158). Apr 30 01:30:39.580710 kubelet[2725]: I0430 01:30:39.580531 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" path="/var/lib/kubelet/pods/14a5f62f-66ac-4dee-89cc-59dd66c6b2e7/volumes" Apr 30 01:30:39.583028 kubelet[2725]: I0430 01:30:39.582911 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced6da18-764a-41bb-85b5-f3af62cdfa1c" path="/var/lib/kubelet/pods/ced6da18-764a-41bb-85b5-f3af62cdfa1c/volumes" Apr 30 01:30:40.205163 sshd[4484]: Accepted publickey for core from 139.178.68.195 port 54158 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:40.207243 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:40.213939 systemd-logind[1491]: New session 28 of user core. Apr 30 01:30:40.223177 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 01:30:41.479728 kubelet[2725]: I0430 01:30:41.475720 2725 memory_manager.go:355] "RemoveStaleState removing state" podUID="14a5f62f-66ac-4dee-89cc-59dd66c6b2e7" containerName="cilium-agent" Apr 30 01:30:41.479728 kubelet[2725]: I0430 01:30:41.479722 2725 memory_manager.go:355] "RemoveStaleState removing state" podUID="ced6da18-764a-41bb-85b5-f3af62cdfa1c" containerName="cilium-operator" Apr 30 01:30:41.537957 systemd[1]: Created slice kubepods-burstable-pod2dbff5df_d396_4c64_85f8_794348ac64b6.slice - libcontainer container kubepods-burstable-pod2dbff5df_d396_4c64_85f8_794348ac64b6.slice. Apr 30 01:30:41.580676 sshd[4486]: Connection closed by 139.178.68.195 port 54158 Apr 30 01:30:41.581156 sshd-session[4484]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:41.585411 systemd-logind[1491]: Session 28 logged out. Waiting for processes to exit. Apr 30 01:30:41.586569 systemd[1]: sshd@25-10.243.74.78:22-139.178.68.195:54158.service: Deactivated successfully. Apr 30 01:30:41.589438 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 01:30:41.591426 systemd-logind[1491]: Removed session 28. 
Apr 30 01:30:41.620261 kubelet[2725]: I0430 01:30:41.620171 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2dbff5df-d396-4c64-85f8-794348ac64b6-hubble-tls\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.620261 kubelet[2725]: I0430 01:30:41.620244 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-cni-path\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.620707 kubelet[2725]: I0430 01:30:41.620292 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2dbff5df-d396-4c64-85f8-794348ac64b6-clustermesh-secrets\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.620707 kubelet[2725]: I0430 01:30:41.620320 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-host-proc-sys-kernel\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.620707 kubelet[2725]: I0430 01:30:41.620358 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-etc-cni-netd\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.621985 kubelet[2725]: I0430 01:30:41.621728 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-xtables-lock\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.621985 kubelet[2725]: I0430 01:30:41.621784 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2dbff5df-d396-4c64-85f8-794348ac64b6-cilium-config-path\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.621985 kubelet[2725]: I0430 01:30:41.621821 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9pw\" (UniqueName: \"kubernetes.io/projected/2dbff5df-d396-4c64-85f8-794348ac64b6-kube-api-access-6s9pw\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.621985 kubelet[2725]: I0430 01:30:41.621864 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-host-proc-sys-net\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.621985 kubelet[2725]: I0430 01:30:41.621892 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2dbff5df-d396-4c64-85f8-794348ac64b6-cilium-ipsec-secrets\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.622215 kubelet[2725]: I0430 01:30:41.621921 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-cilium-run\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.622215 kubelet[2725]: I0430 01:30:41.621950 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-hostproc\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.622215 kubelet[2725]: I0430 01:30:41.621994 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-lib-modules\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.622215 kubelet[2725]: I0430 01:30:41.622035 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-bpf-maps\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.622215 kubelet[2725]: I0430 01:30:41.622073 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2dbff5df-d396-4c64-85f8-794348ac64b6-cilium-cgroup\") pod \"cilium-4724j\" (UID: \"2dbff5df-d396-4c64-85f8-794348ac64b6\") " pod="kube-system/cilium-4724j" Apr 30 01:30:41.753765 systemd[1]: Started sshd@26-10.243.74.78:22-139.178.68.195:54172.service - OpenSSH per-connection server daemon (139.178.68.195:54172). Apr 30 01:30:41.852043 containerd[1508]: time="2025-04-30T01:30:41.851950124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4724j,Uid:2dbff5df-d396-4c64-85f8-794348ac64b6,Namespace:kube-system,Attempt:0,}" Apr 30 01:30:41.894653 containerd[1508]: time="2025-04-30T01:30:41.894381768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:30:41.894653 containerd[1508]: time="2025-04-30T01:30:41.894545629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:30:41.894653 containerd[1508]: time="2025-04-30T01:30:41.894572629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:30:41.895831 containerd[1508]: time="2025-04-30T01:30:41.894709405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:30:41.922246 systemd[1]: Started cri-containerd-22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888.scope - libcontainer container 22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888. 
Apr 30 01:30:41.953633 containerd[1508]: time="2025-04-30T01:30:41.953421481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4724j,Uid:2dbff5df-d396-4c64-85f8-794348ac64b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\"" Apr 30 01:30:41.960196 containerd[1508]: time="2025-04-30T01:30:41.960015245Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 01:30:41.971670 containerd[1508]: time="2025-04-30T01:30:41.971633103Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f\"" Apr 30 01:30:41.972309 containerd[1508]: time="2025-04-30T01:30:41.972281223Z" level=info msg="StartContainer for \"db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f\"" Apr 30 01:30:42.009191 systemd[1]: Started cri-containerd-db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f.scope - libcontainer container db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f. Apr 30 01:30:42.047041 containerd[1508]: time="2025-04-30T01:30:42.046942902Z" level=info msg="StartContainer for \"db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f\" returns successfully" Apr 30 01:30:42.067764 systemd[1]: cri-containerd-db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f.scope: Deactivated successfully. Apr 30 01:30:42.175540 containerd[1508]: time="2025-04-30T01:30:42.175401349Z" level=info msg="shim disconnected" id=db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f namespace=k8s.io Apr 30 01:30:42.175540 containerd[1508]: time="2025-04-30T01:30:42.175479251Z" level=warning msg="cleaning up after shim disconnected" id=db91a4adc5e0fe5b59a19438c9cac4f948384b9c8d08a482b9847294c575399f namespace=k8s.io Apr 30 01:30:42.175540 containerd[1508]: time="2025-04-30T01:30:42.175506335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:42.664594 sshd[4499]: Accepted publickey for core from 139.178.68.195 port 54172 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:42.666522 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:42.673174 systemd-logind[1491]: New session 29 of user core. Apr 30 01:30:42.678214 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 01:30:43.140218 containerd[1508]: time="2025-04-30T01:30:43.140153047Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 01:30:43.162528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705295267.mount: Deactivated successfully. Apr 30 01:30:43.169744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233071760.mount: Deactivated successfully. 
Apr 30 01:30:43.174499 containerd[1508]: time="2025-04-30T01:30:43.173945570Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a\"" Apr 30 01:30:43.176603 containerd[1508]: time="2025-04-30T01:30:43.176564444Z" level=info msg="StartContainer for \"f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a\"" Apr 30 01:30:43.217271 systemd[1]: Started cri-containerd-f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a.scope - libcontainer container f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a. Apr 30 01:30:43.255907 containerd[1508]: time="2025-04-30T01:30:43.255855706Z" level=info msg="StartContainer for \"f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a\" returns successfully" Apr 30 01:30:43.271726 systemd[1]: cri-containerd-f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a.scope: Deactivated successfully. Apr 30 01:30:43.286002 sshd[4603]: Connection closed by 139.178.68.195 port 54172 Apr 30 01:30:43.285747 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:43.291340 systemd[1]: sshd@26-10.243.74.78:22-139.178.68.195:54172.service: Deactivated successfully. Apr 30 01:30:43.294925 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 01:30:43.297336 systemd-logind[1491]: Session 29 logged out. Waiting for processes to exit. Apr 30 01:30:43.300192 systemd-logind[1491]: Removed session 29. Apr 30 01:30:43.320023 containerd[1508]: time="2025-04-30T01:30:43.319887476Z" level=info msg="shim disconnected" id=f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a namespace=k8s.io Apr 30 01:30:43.320348 containerd[1508]: time="2025-04-30T01:30:43.320075641Z" level=warning msg="cleaning up after shim disconnected" id=f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a namespace=k8s.io Apr 30 01:30:43.320348 containerd[1508]: time="2025-04-30T01:30:43.320095054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:43.449351 systemd[1]: Started sshd@27-10.243.74.78:22-139.178.68.195:54184.service - OpenSSH per-connection server daemon (139.178.68.195:54184). Apr 30 01:30:43.736470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3a4205b3656f83f664cc70a857d53434ffed20279b7b4ac8b0968499676be7a-rootfs.mount: Deactivated successfully. 
Apr 30 01:30:43.800083 kubelet[2725]: E0430 01:30:43.799993 2725 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 01:30:44.140451 containerd[1508]: time="2025-04-30T01:30:44.140363253Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 01:30:44.232094 containerd[1508]: time="2025-04-30T01:30:44.231869189Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab\"" Apr 30 01:30:44.232876 containerd[1508]: time="2025-04-30T01:30:44.232685004Z" level=info msg="StartContainer for \"132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab\"" Apr 30 01:30:44.277185 systemd[1]: Started cri-containerd-132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab.scope - libcontainer container 132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab. Apr 30 01:30:44.321306 containerd[1508]: time="2025-04-30T01:30:44.320733644Z" level=info msg="StartContainer for \"132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab\" returns successfully" Apr 30 01:30:44.330514 systemd[1]: cri-containerd-132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab.scope: Deactivated successfully. Apr 30 01:30:44.351457 sshd[4670]: Accepted publickey for core from 139.178.68.195 port 54184 ssh2: RSA SHA256:MfEzgEaq/hy6que95IR+2Hjd2kD3vXBLQEa1zk00RtE Apr 30 01:30:44.353925 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:30:44.365553 systemd-logind[1491]: New session 30 of user core. Apr 30 01:30:44.368772 containerd[1508]: time="2025-04-30T01:30:44.368382177Z" level=info msg="shim disconnected" id=132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab namespace=k8s.io Apr 30 01:30:44.368772 containerd[1508]: time="2025-04-30T01:30:44.368453335Z" level=warning msg="cleaning up after shim disconnected" id=132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab namespace=k8s.io Apr 30 01:30:44.368772 containerd[1508]: time="2025-04-30T01:30:44.368482600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:44.372214 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 30 01:30:44.736359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-132789d0358bbf1f191532024f0d452284fff4b2c060d3b2db97315613abcbab-rootfs.mount: Deactivated successfully. Apr 30 01:30:45.147362 containerd[1508]: time="2025-04-30T01:30:45.147298772Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 01:30:45.173344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282270737.mount: Deactivated successfully. 
Apr 30 01:30:45.180699 containerd[1508]: time="2025-04-30T01:30:45.180603930Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb\"" Apr 30 01:30:45.183000 containerd[1508]: time="2025-04-30T01:30:45.181601680Z" level=info msg="StartContainer for \"a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb\"" Apr 30 01:30:45.241323 systemd[1]: Started cri-containerd-a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb.scope - libcontainer container a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb. Apr 30 01:30:45.280768 systemd[1]: cri-containerd-a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb.scope: Deactivated successfully. Apr 30 01:30:45.287194 containerd[1508]: time="2025-04-30T01:30:45.287145244Z" level=info msg="StartContainer for \"a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb\" returns successfully" Apr 30 01:30:45.287909 containerd[1508]: time="2025-04-30T01:30:45.283658367Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dbff5df_d396_4c64_85f8_794348ac64b6.slice/cri-containerd-a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb.scope/memory.events\": no such file or directory" Apr 30 01:30:45.316589 containerd[1508]: time="2025-04-30T01:30:45.316513243Z" level=info msg="shim disconnected" id=a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb namespace=k8s.io Apr 30 01:30:45.316924 containerd[1508]: time="2025-04-30T01:30:45.316895351Z" level=warning msg="cleaning up after shim disconnected" id=a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb namespace=k8s.io Apr 30 01:30:45.317085 containerd[1508]: time="2025-04-30T01:30:45.317058632Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:30:45.333539 containerd[1508]: time="2025-04-30T01:30:45.333464372Z" level=warning msg="cleanup warnings time=\"2025-04-30T01:30:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 01:30:45.736498 systemd[1]: run-containerd-runc-k8s.io-a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb-runc.sAjSXy.mount: Deactivated successfully. Apr 30 01:30:45.736639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9d274f09b5270acd3ceb859ae827e6bfe1f7fb422098a09bef0d5d0ba66d1cb-rootfs.mount: Deactivated successfully. Apr 30 01:30:46.152329 containerd[1508]: time="2025-04-30T01:30:46.152272096Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 01:30:46.176409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663306018.mount: Deactivated successfully. 
Apr 30 01:30:46.179086 containerd[1508]: time="2025-04-30T01:30:46.179034520Z" level=info msg="CreateContainer within sandbox \"22a17311737c79605171436cb6e6ce937fa3392e377c48f81831978386723888\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba4bc41624ab2e731000f9e6338318eb4c308efcd860ddaba6f006b4098db3b4\"" Apr 30 01:30:46.182218 containerd[1508]: time="2025-04-30T01:30:46.182186567Z" level=info msg="StartContainer for \"ba4bc41624ab2e731000f9e6338318eb4c308efcd860ddaba6f006b4098db3b4\"" Apr 30 01:30:46.227154 systemd[1]: Started cri-containerd-ba4bc41624ab2e731000f9e6338318eb4c308efcd860ddaba6f006b4098db3b4.scope - libcontainer container ba4bc41624ab2e731000f9e6338318eb4c308efcd860ddaba6f006b4098db3b4. Apr 30 01:30:46.293837 containerd[1508]: time="2025-04-30T01:30:46.293665526Z" level=info msg="StartContainer for \"ba4bc41624ab2e731000f9e6338318eb4c308efcd860ddaba6f006b4098db3b4\" returns successfully" Apr 30 01:30:46.987033 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 30 01:30:47.009340 kubelet[2725]: I0430 01:30:47.009261 2725 setters.go:602] "Node became not ready" node="srv-tw98u.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T01:30:47Z","lastTransitionTime":"2025-04-30T01:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 01:30:47.190578 kubelet[2725]: I0430 01:30:47.190470 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4724j" podStartSLOduration=6.190440855 podStartE2EDuration="6.190440855s" podCreationTimestamp="2025-04-30 01:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:30:47.187505963 +0000 UTC m=+153.856703188" watchObservedRunningTime="2025-04-30 01:30:47.190440855 +0000 UTC m=+153.859638050" Apr 30 01:30:49.584942 systemd[1]: run-containerd-runc-k8s.io-ba4bc41624ab2e731000f9e6338318eb4c308efcd860ddaba6f006b4098db3b4-runc.CZ2wQp.mount: Deactivated successfully. Apr 30 01:30:50.808533 systemd-networkd[1425]: lxc_health: Link UP Apr 30 01:30:50.819781 systemd-networkd[1425]: lxc_health: Gained carrier Apr 30 01:30:52.587254 systemd-networkd[1425]: lxc_health: Gained IPv6LL Apr 30 01:30:54.220949 kubelet[2725]: E0430 01:30:54.220845 2725 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49888->127.0.0.1:45377: write tcp 127.0.0.1:49888->127.0.0.1:45377: write: broken pipe Apr 30 01:30:56.664003 sshd[4723]: Connection closed by 139.178.68.195 port 54184 Apr 30 01:30:56.665794 sshd-session[4670]: pam_unix(sshd:session): session closed for user core Apr 30 01:30:56.675463 systemd-logind[1491]: Session 30 logged out. Waiting for processes to exit. Apr 30 01:30:56.676524 systemd[1]: sshd@27-10.243.74.78:22-139.178.68.195:54184.service: Deactivated successfully. Apr 30 01:30:56.683387 systemd[1]: session-30.scope: Deactivated successfully. Apr 30 01:30:56.691332 systemd-logind[1491]: Removed session 30.