Nov 1 00:42:36.026972 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 00:42:36.026994 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:36.027002 kernel: BIOS-provided physical RAM map: Nov 1 00:42:36.027008 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 1 00:42:36.027014 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 1 00:42:36.027019 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 1 00:42:36.027026 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 1 00:42:36.027032 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 1 00:42:36.027039 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 1 00:42:36.027045 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 1 00:42:36.027051 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 1 00:42:36.027056 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 1 00:42:36.027062 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 1 00:42:36.027067 kernel: NX (Execute Disable) protection: active Nov 1 00:42:36.027076 kernel: SMBIOS 2.8 present. Nov 1 00:42:36.027082 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 1 00:42:36.027088 kernel: Hypervisor detected: KVM Nov 1 00:42:36.027094 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:42:36.027103 kernel: kvm-clock: cpu 0, msr 1d1a0001, primary cpu clock Nov 1 00:42:36.027109 kernel: kvm-clock: using sched offset of 4549640220 cycles Nov 1 00:42:36.027116 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:42:36.027123 kernel: tsc: Detected 2794.748 MHz processor Nov 1 00:42:36.027129 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:42:36.027137 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:42:36.027143 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 1 00:42:36.027149 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:42:36.027156 kernel: Using GB pages for direct mapping Nov 1 00:42:36.027162 kernel: ACPI: Early table checksum verification disabled Nov 1 00:42:36.027168 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 1 00:42:36.027174 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:42:36.027181 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:42:36.027187 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:42:36.027195 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 1 00:42:36.027201 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:42:36.027207 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:42:36.027213 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Nov 1 00:42:36.027220 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:42:36.027226 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 1 00:42:36.027232 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 1 00:42:36.027239 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 1 00:42:36.027248 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 1 00:42:36.027255 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 1 00:42:36.027262 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 1 00:42:36.027268 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 1 00:42:36.027275 kernel: No NUMA configuration found Nov 1 00:42:36.027281 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 1 00:42:36.027289 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Nov 1 00:42:36.027296 kernel: Zone ranges: Nov 1 00:42:36.027303 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:42:36.027309 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 1 00:42:36.027316 kernel: Normal empty Nov 1 00:42:36.027322 kernel: Movable zone start for each node Nov 1 00:42:36.027329 kernel: Early memory node ranges Nov 1 00:42:36.027336 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 1 00:42:36.027342 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 1 00:42:36.027350 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 1 00:42:36.027359 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:42:36.027366 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:42:36.027373 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 1 00:42:36.027380 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:42:36.027389 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:42:36.027398 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:42:36.027407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:42:36.027415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:42:36.027424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:42:36.027437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:42:36.027444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:42:36.027462 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:42:36.027469 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:42:36.027475 kernel: TSC deadline timer available Nov 1 00:42:36.027482 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 1 00:42:36.027489 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 1 00:42:36.027495 kernel: kvm-guest: setup PV sched yield Nov 1 00:42:36.027502 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 1 00:42:36.027510 kernel: Booting paravirtualized kernel on KVM Nov 1 00:42:36.027517 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:42:36.027524 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Nov 1 00:42:36.027531 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Nov 1 00:42:36.027537 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 
alloc=1*2097152 Nov 1 00:42:36.027544 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 1 00:42:36.027550 kernel: kvm-guest: setup async PF for cpu 0 Nov 1 00:42:36.027557 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Nov 1 00:42:36.027563 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:42:36.027571 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:42:36.027582 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Nov 1 00:42:36.027589 kernel: Policy zone: DMA32 Nov 1 00:42:36.027597 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:36.027605 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 00:42:36.027611 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:42:36.027618 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:42:36.027625 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:42:36.027634 kernel: Memory: 2436700K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 134796K reserved, 0K cma-reserved) Nov 1 00:42:36.027641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 1 00:42:36.027647 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 00:42:36.027654 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 00:42:36.027661 kernel: rcu: Hierarchical RCU implementation. Nov 1 00:42:36.027668 kernel: rcu: RCU event tracing is enabled. Nov 1 00:42:36.027675 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 1 00:42:36.027682 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:42:36.027688 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:42:36.027696 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:42:36.027703 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 1 00:42:36.027710 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 1 00:42:36.027716 kernel: random: crng init done Nov 1 00:42:36.027723 kernel: Console: colour VGA+ 80x25 Nov 1 00:42:36.027730 kernel: printk: console [ttyS0] enabled Nov 1 00:42:36.027736 kernel: ACPI: Core revision 20210730 Nov 1 00:42:36.027743 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:42:36.027750 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:42:36.027758 kernel: x2apic enabled Nov 1 00:42:36.027764 kernel: Switched APIC routing to physical x2apic. Nov 1 00:42:36.027783 kernel: kvm-guest: setup PV IPIs Nov 1 00:42:36.027790 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:42:36.027797 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 1 00:42:36.027806 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Nov 1 00:42:36.027813 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 00:42:36.027819 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 1 00:42:36.027826 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 1 00:42:36.027840 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:42:36.027849 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:42:36.027863 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:42:36.027875 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 1 00:42:36.027883 kernel: active return thunk: retbleed_return_thunk Nov 1 00:42:36.027891 kernel: RETBleed: Mitigation: untrained return thunk Nov 1 00:42:36.027901 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:42:36.027910 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Nov 1 00:42:36.027918 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:42:36.027928 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:42:36.027935 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:42:36.027942 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:42:36.027949 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 1 00:42:36.027956 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:42:36.027963 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:42:36.027970 kernel: LSM: Security Framework initializing Nov 1 00:42:36.027978 kernel: SELinux: Initializing. Nov 1 00:42:36.027985 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:42:36.027992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:42:36.027999 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 1 00:42:36.028006 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 1 00:42:36.028013 kernel: ... version: 0 Nov 1 00:42:36.028020 kernel: ... bit width: 48 Nov 1 00:42:36.028027 kernel: ... generic registers: 6 Nov 1 00:42:36.028034 kernel: ... value mask: 0000ffffffffffff Nov 1 00:42:36.028042 kernel: ... max period: 00007fffffffffff Nov 1 00:42:36.028049 kernel: ... fixed-purpose events: 0 Nov 1 00:42:36.028056 kernel: ... event mask: 000000000000003f Nov 1 00:42:36.028063 kernel: signal: max sigframe size: 1776 Nov 1 00:42:36.028069 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:42:36.028076 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:42:36.028083 kernel: x86: Booting SMP configuration: Nov 1 00:42:36.028090 kernel: .... 
node #0, CPUs: #1 Nov 1 00:42:36.028097 kernel: kvm-clock: cpu 1, msr 1d1a0041, secondary cpu clock Nov 1 00:42:36.028105 kernel: kvm-guest: setup async PF for cpu 1 Nov 1 00:42:36.028112 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Nov 1 00:42:36.028119 kernel: #2 Nov 1 00:42:36.028126 kernel: kvm-clock: cpu 2, msr 1d1a0081, secondary cpu clock Nov 1 00:42:36.028133 kernel: kvm-guest: setup async PF for cpu 2 Nov 1 00:42:36.028140 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Nov 1 00:42:36.028151 kernel: #3 Nov 1 00:42:36.028158 kernel: kvm-clock: cpu 3, msr 1d1a00c1, secondary cpu clock Nov 1 00:42:36.028165 kernel: kvm-guest: setup async PF for cpu 3 Nov 1 00:42:36.028172 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Nov 1 00:42:36.028180 kernel: smp: Brought up 1 node, 4 CPUs Nov 1 00:42:36.028187 kernel: smpboot: Max logical packages: 1 Nov 1 00:42:36.028194 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 1 00:42:36.028201 kernel: devtmpfs: initialized Nov 1 00:42:36.028208 kernel: x86/mm: Memory block size: 128MB Nov 1 00:42:36.028215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:42:36.028222 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 1 00:42:36.028229 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:42:36.028236 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:42:36.028244 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:42:36.028251 kernel: audit: type=2000 audit(1761957755.675:1): state=initialized audit_enabled=0 res=1 Nov 1 00:42:36.028258 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:42:36.028265 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:42:36.028272 kernel: cpuidle: using governor menu Nov 1 00:42:36.028279 kernel: ACPI: bus type PCI registered Nov 1 00:42:36.028286 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:42:36.028292 kernel: dca service started, version 1.12.1 Nov 1 00:42:36.028300 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 1 00:42:36.028308 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Nov 1 00:42:36.028315 kernel: PCI: Using configuration type 1 for base access Nov 1 00:42:36.028322 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 00:42:36.028329 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:42:36.028336 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:42:36.028343 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:42:36.028350 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:42:36.028357 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:42:36.028364 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:42:36.028372 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:42:36.028380 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:42:36.028411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:42:36.028428 kernel: ACPI: Interpreter enabled Nov 1 00:42:36.028472 kernel: ACPI: PM: (supports S0 S3 S5) Nov 1 00:42:36.028480 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:42:36.028487 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:42:36.028495 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 00:42:36.028504 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:42:36.028815 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:42:36.028953 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 1 00:42:36.029065 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 1 00:42:36.029080 kernel: PCI host bridge to bus 0000:00 Nov 1 00:42:36.029205 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:42:36.029305 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:42:36.029397 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:42:36.029500 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 1 00:42:36.029582 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:42:36.029648 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 1 00:42:36.029715 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:42:36.029910 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 1 00:42:36.030008 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 1 00:42:36.030092 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 1 00:42:36.035678 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 1 00:42:36.035880 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 1 00:42:36.036058 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:42:36.036271 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 1 00:42:36.036481 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 1 00:42:36.036670 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 1 00:42:36.036878 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 1 00:42:36.036999 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:42:36.037080 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Nov 1 00:42:36.037175 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 1 00:42:36.037252 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 1 00:42:36.037345 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:42:36.037427 kernel: pci 0000:00:04.0: reg 0x10: [io 
0xc0e0-0xc0ff] Nov 1 00:42:36.037544 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Nov 1 00:42:36.037700 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 1 00:42:36.037906 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 1 00:42:36.038139 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 1 00:42:36.038348 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 00:42:36.038615 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 1 00:42:36.038846 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Nov 1 00:42:36.039058 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Nov 1 00:42:36.039297 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 1 00:42:36.039525 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 1 00:42:36.039548 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:42:36.039567 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:42:36.039585 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:42:36.039602 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:42:36.039625 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 00:42:36.039643 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 00:42:36.039661 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 00:42:36.039679 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 00:42:36.039697 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 00:42:36.039715 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 00:42:36.039733 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 00:42:36.039751 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 00:42:36.039793 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 00:42:36.039838 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 00:42:36.039856 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 1 00:42:36.039875 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 00:42:36.039893 kernel: iommu: Default domain type: Translated Nov 1 00:42:36.039911 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:42:36.040173 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 00:42:36.040393 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:42:36.040628 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 00:42:36.040657 kernel: vgaarb: loaded Nov 1 00:42:36.040675 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:42:36.040693 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:42:36.040711 kernel: PTP clock support registered Nov 1 00:42:36.040729 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:42:36.040748 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:42:36.040766 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 1 00:42:36.040798 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 1 00:42:36.040816 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:42:36.040838 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 1 00:42:36.040857 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:42:36.040875 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:42:36.040894 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:42:36.040912 kernel: pnp: PnP ACPI init Nov 1 00:42:36.041163 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 00:42:36.041187 kernel: pnp: PnP ACPI: found 6 devices Nov 1 00:42:36.041206 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:42:36.041230 kernel: NET: Registered PF_INET protocol family Nov 1 00:42:36.041248 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:42:36.041266 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:42:36.041285 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:42:36.041303 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:42:36.041321 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Nov 1 00:42:36.041339 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:42:36.041358 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:42:36.041375 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:42:36.041397 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:42:36.041416 kernel: NET: Registered PF_XDP protocol family Nov 1 00:42:36.041628 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:42:36.041839 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:42:36.042034 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:42:36.042229 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 1 00:42:36.042424 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 00:42:36.042635 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 1 00:42:36.042663 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:42:36.042682 kernel: Initialise system trusted keyrings Nov 1 00:42:36.042700 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:42:36.042718 kernel: Key type asymmetric registered Nov 1 00:42:36.042736 kernel: Asymmetric key parser 'x509' registered Nov 1 00:42:36.042754 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:42:36.042783 kernel: io scheduler mq-deadline registered Nov 1 00:42:36.042801 kernel: io scheduler kyber registered Nov 1 00:42:36.042819 kernel: io scheduler bfq registered Nov 1 00:42:36.042841 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:42:36.042861 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:42:36.042880 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:42:36.042899 kernel: ACPI: \_SB_.GSIE: Enabled 
at IRQ 20 Nov 1 00:42:36.042917 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:42:36.042936 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:42:36.042954 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:42:36.042973 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:42:36.042991 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:42:36.043241 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 1 00:42:36.043264 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:42:36.043475 kernel: rtc_cmos 00:04: registered as rtc0 Nov 1 00:42:36.043679 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:42:35 UTC (1761957755) Nov 1 00:42:36.043893 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:42:36.043915 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:42:36.043933 kernel: Segment Routing with IPv6 Nov 1 00:42:36.043951 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:42:36.043975 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:42:36.043993 kernel: Key type dns_resolver registered Nov 1 00:42:36.044011 kernel: IPI shorthand broadcast: enabled Nov 1 00:42:36.044029 kernel: sched_clock: Marking stable (583259494, 200865456)->(964069334, -179944384) Nov 1 00:42:36.044046 kernel: registered taskstats version 1 Nov 1 00:42:36.044064 kernel: Loading compiled-in X.509 certificates Nov 1 00:42:36.044082 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 00:42:36.044100 kernel: Key type .fscrypt registered Nov 1 00:42:36.044118 kernel: Key type fscrypt-provisioning registered Nov 1 00:42:36.044142 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:42:36.044160 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:42:36.044179 kernel: ima: No architecture policies found Nov 1 00:42:36.044197 kernel: clk: Disabling unused clocks Nov 1 00:42:36.044214 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 00:42:36.044231 kernel: Write protecting the kernel read-only data: 28672k Nov 1 00:42:36.044249 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 00:42:36.044267 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 00:42:36.044285 kernel: Run /init as init process Nov 1 00:42:36.044309 kernel: with arguments: Nov 1 00:42:36.044326 kernel: /init Nov 1 00:42:36.044343 kernel: with environment: Nov 1 00:42:36.044360 kernel: HOME=/ Nov 1 00:42:36.044378 kernel: TERM=linux Nov 1 00:42:36.044397 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:42:36.044419 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:42:36.044441 systemd[1]: Detected virtualization kvm. Nov 1 00:42:36.044482 systemd[1]: Detected architecture x86-64. Nov 1 00:42:36.044501 systemd[1]: Running in initrd. Nov 1 00:42:36.044521 systemd[1]: No hostname configured, using default hostname. Nov 1 00:42:36.044540 systemd[1]: Hostname set to <localhost>. Nov 1 00:42:36.044562 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:42:36.044581 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:42:36.044601 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:42:36.044620 systemd[1]: Reached target cryptsetup.target. Nov 1 00:42:36.044643 systemd[1]: Reached target paths.target. Nov 1 00:42:36.044663 systemd[1]: Reached target slices.target. Nov 1 00:42:36.044697 systemd[1]: Reached target swap.target. Nov 1 00:42:36.044721 systemd[1]: Reached target timers.target. Nov 1 00:42:36.044742 systemd[1]: Listening on iscsid.socket. Nov 1 00:42:36.044765 systemd[1]: Listening on iscsiuio.socket. Nov 1 00:42:36.044797 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:42:36.044817 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:42:36.044837 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:42:36.044857 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:42:36.044877 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:42:36.044897 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:42:36.044917 systemd[1]: Reached target sockets.target. Nov 1 00:42:36.044938 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:42:36.044961 systemd[1]: Finished network-cleanup.service. Nov 1 00:42:36.044981 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:42:36.045001 systemd[1]: Starting systemd-journald.service... Nov 1 00:42:36.045021 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:42:36.045041 systemd[1]: Starting systemd-resolved.service... Nov 1 00:42:36.045061 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:42:36.045081 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:42:36.045101 kernel: audit: type=1130 audit(1761957756.029:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.045120 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:42:36.045145 systemd-journald[199]: Journal started Nov 1 00:42:36.045220 systemd-journald[199]: Runtime Journal (/run/log/journal/bc7a93b2b6544f86b07a782fc0210b3a) is 6.0M, max 48.5M, 42.5M free. Nov 1 00:42:36.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.029906 systemd-modules-load[200]: Inserted module 'overlay' Nov 1 00:42:36.124849 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:42:36.124897 kernel: Bridge firewalling registered Nov 1 00:42:36.124912 kernel: SCSI subsystem initialized Nov 1 00:42:36.124925 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:42:36.124938 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:42:36.124950 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:42:36.124962 kernel: audit: type=1130 audit(1761957756.124:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:42:36.064972 systemd-resolved[201]: Positive Trust Anchors: Nov 1 00:42:36.165278 systemd[1]: Started systemd-journald.service. Nov 1 00:42:36.165308 kernel: audit: type=1130 audit(1761957756.132:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.165334 kernel: audit: type=1130 audit(1761957756.132:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.165348 kernel: audit: type=1130 audit(1761957756.133:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.165360 kernel: audit: type=1130 audit(1761957756.149:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.064994 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:42:36.065041 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:42:36.068913 systemd-resolved[201]: Defaulting to hostname 'linux'. Nov 1 00:42:36.187890 kernel: audit: type=1130 audit(1761957756.179:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.073746 systemd-modules-load[200]: Inserted module 'br_netfilter' Nov 1 00:42:36.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:36.105727 systemd-modules-load[200]: Inserted module 'dm_multipath' Nov 1 00:42:36.197921 kernel: audit: type=1130 audit(1761957756.190:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.133070 systemd[1]: Started systemd-resolved.service. Nov 1 00:42:36.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.133360 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:42:36.207616 kernel: audit: type=1130 audit(1761957756.199:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.134046 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:42:36.150485 systemd[1]: Reached target nss-lookup.target. Nov 1 00:42:36.157564 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:42:36.158262 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:42:36.214591 dracut-cmdline[223]: dracut-dracut-053 Nov 1 00:42:36.158880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:42:36.170151 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:42:36.180042 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:42:36.191229 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:42:36.202323 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:42:36.224244 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:36.286485 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:42:36.307497 kernel: iscsi: registered transport (tcp) Nov 1 00:42:36.335091 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:42:36.335194 kernel: QLogic iSCSI HBA Driver Nov 1 00:42:36.369989 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:42:36.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.373638 systemd[1]: Starting dracut-pre-udev.service... Nov 1 00:42:36.422507 kernel: raid6: avx2x4 gen() 25289 MB/s Nov 1 00:42:36.440492 kernel: raid6: avx2x4 xor() 6618 MB/s Nov 1 00:42:36.458485 kernel: raid6: avx2x2 gen() 26458 MB/s Nov 1 00:42:36.478496 kernel: raid6: avx2x2 xor() 15417 MB/s Nov 1 00:42:36.496483 kernel: raid6: avx2x1 gen() 25631 MB/s Nov 1 00:42:36.514497 kernel: raid6: avx2x1 xor() 14103 MB/s Nov 1 00:42:36.532490 kernel: raid6: sse2x4 gen() 14342 MB/s Nov 1 00:42:36.550480 kernel: raid6: sse2x4 xor() 6219 MB/s Nov 1 00:42:36.568502 kernel: raid6: sse2x2 gen() 15350 MB/s Nov 1 00:42:36.586494 kernel: raid6: sse2x2 xor() 9451 MB/s Nov 1 00:42:36.604498 kernel: raid6: sse2x1 gen() 11564 MB/s Nov 1 00:42:36.622963 kernel: raid6: sse2x1 xor() 7483 MB/s Nov 1 00:42:36.623043 kernel: raid6: using algorithm avx2x2 gen() 26458 MB/s Nov 1 00:42:36.623053 kernel: raid6: .... 
xor() 15417 MB/s, rmw enabled Nov 1 00:42:36.624351 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:42:36.639482 kernel: xor: automatically using best checksumming function avx Nov 1 00:42:36.742482 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:42:36.752124 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:42:36.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.754000 audit: BPF prog-id=7 op=LOAD Nov 1 00:42:36.754000 audit: BPF prog-id=8 op=LOAD Nov 1 00:42:36.755628 systemd[1]: Starting systemd-udevd.service... Nov 1 00:42:36.773111 systemd-udevd[400]: Using default interface naming scheme 'v252'. Nov 1 00:42:36.778376 systemd[1]: Started systemd-udevd.service. Nov 1 00:42:36.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.779926 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:42:36.792217 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Nov 1 00:42:36.819781 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:42:36.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.822053 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:42:36.864054 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:42:36.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:36.901120 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 1 00:42:36.915130 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:42:36.915148 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:42:36.915159 kernel: GPT:9289727 != 19775487 Nov 1 00:42:36.915170 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:42:36.915181 kernel: GPT:9289727 != 19775487 Nov 1 00:42:36.915191 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:42:36.915207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:42:36.916474 kernel: libata version 3.00 loaded. Nov 1 00:42:36.926038 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:42:36.971808 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:42:36.971830 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:42:36.971965 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:42:36.972079 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:42:36.972100 kernel: AES CTR mode by8 optimization enabled Nov 1 00:42:36.972115 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (452) Nov 1 00:42:36.972128 kernel: scsi host0: ahci Nov 1 00:42:36.972255 kernel: scsi host1: ahci Nov 1 00:42:36.972361 kernel: scsi host2: ahci Nov 1 00:42:36.972480 kernel: scsi host3: ahci Nov 1 00:42:36.972642 kernel: scsi host4: ahci Nov 1 00:42:36.972809 kernel: scsi host5: ahci Nov 1 00:42:36.972943 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 1 00:42:36.972958 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 1 00:42:36.972970 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 1 00:42:36.972983 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 1 00:42:36.972996 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 1 00:42:36.973008 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 1 00:42:36.956654 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:42:37.039095 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:42:37.043506 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:42:37.050461 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:42:37.060099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:42:37.064179 systemd[1]: Starting disk-uuid.service... Nov 1 00:42:37.076953 disk-uuid[544]: Primary Header is updated. Nov 1 00:42:37.076953 disk-uuid[544]: Secondary Entries is updated. Nov 1 00:42:37.076953 disk-uuid[544]: Secondary Header is updated. Nov 1 00:42:37.083381 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:42:37.085501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:42:37.278512 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:37.278595 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:37.286513 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:37.288485 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:37.288524 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:37.290484 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 00:42:37.293013 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 00:42:37.293033 kernel: ata3.00: applying bridge limits Nov 1 00:42:37.295272 kernel: ata3.00: configured for UDMA/100 Nov 1 00:42:37.298476 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 00:42:37.328890 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 00:42:37.346248 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:42:37.346263 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 00:42:38.085483 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:42:38.085998 disk-uuid[545]: The operation has completed successfully. Nov 1 00:42:38.116358 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:42:38.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:38.116481 systemd[1]: Finished disk-uuid.service. Nov 1 00:42:38.125627 systemd[1]: Starting verity-setup.service... Nov 1 00:42:38.142488 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:42:38.167426 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:42:38.170522 systemd[1]: Finished verity-setup.service. Nov 1 00:42:38.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.173816 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:42:38.246313 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:42:38.249214 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:42:38.247288 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:42:38.248164 systemd[1]: Starting ignition-setup.service... Nov 1 00:42:38.251368 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:42:38.269376 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:38.269437 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:42:38.269464 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:42:38.281336 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:42:38.290501 systemd[1]: Finished ignition-setup.service. Nov 1 00:42:38.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.291978 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:42:38.332568 ignition[661]: Ignition 2.14.0 Nov 1 00:42:38.332629 ignition[661]: Stage: fetch-offline Nov 1 00:42:38.332685 ignition[661]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:42:38.332696 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:42:38.332834 ignition[661]: parsed url from cmdline: "" Nov 1 00:42:38.332838 ignition[661]: no config URL provided Nov 1 00:42:38.332845 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:42:38.332855 ignition[661]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:42:38.332876 ignition[661]: op(1): [started] loading QEMU firmware config module Nov 1 00:42:38.332881 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:42:38.337179 ignition[661]: op(1): [finished] loading QEMU firmware config module Nov 1 00:42:38.357656 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:42:38.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.360000 audit: BPF prog-id=9 op=LOAD Nov 1 00:42:38.361552 systemd[1]: Starting systemd-networkd.service... 
Nov 1 00:42:38.430489 ignition[661]: parsing config with SHA512: 68dede8f11dc6b8accd4c9bc3bb899acd44b959579b81dd12cd9598907354b1ee614391ef5fe920f00f83d7940934858a92825f7d395e5fb3c26f59c668dc2a9 Nov 1 00:42:38.440086 unknown[661]: fetched base config from "system" Nov 1 00:42:38.440100 unknown[661]: fetched user config from "qemu" Nov 1 00:42:38.440762 ignition[661]: fetch-offline: fetch-offline passed Nov 1 00:42:38.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.442323 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:42:38.440834 ignition[661]: Ignition finished successfully Nov 1 00:42:38.461748 systemd-networkd[728]: lo: Link UP Nov 1 00:42:38.461758 systemd-networkd[728]: lo: Gained carrier Nov 1 00:42:38.464388 systemd-networkd[728]: Enumeration completed Nov 1 00:42:38.464604 systemd[1]: Started systemd-networkd.service. Nov 1 00:42:38.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.465580 systemd[1]: Reached target network.target. Nov 1 00:42:38.469023 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:42:38.470068 systemd[1]: Starting ignition-kargs.service... Nov 1 00:42:38.472307 systemd[1]: Starting iscsiuio.service... Nov 1 00:42:38.479123 systemd-networkd[728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:42:38.479286 systemd[1]: Started iscsiuio.service. Nov 1 00:42:38.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.482203 systemd[1]: Starting iscsid.service... Nov 1 00:42:38.486950 iscsid[737]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:38.486950 iscsid[737]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:42:38.486950 iscsid[737]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 00:42:38.486950 iscsid[737]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:42:38.486950 iscsid[737]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:42:38.486950 iscsid[737]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:38.486950 iscsid[737]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:42:38.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Nov 1 00:42:38.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.489077 ignition[730]: Ignition 2.14.0 Nov 1 00:42:38.487160 systemd[1]: Started iscsid.service. Nov 1 00:42:38.489093 ignition[730]: Stage: kargs Nov 1 00:42:38.489064 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:42:38.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.489258 ignition[730]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:42:38.493946 systemd-networkd[728]: eth0: Link UP Nov 1 00:42:38.489282 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:42:38.493955 systemd-networkd[728]: eth0: Gained carrier Nov 1 00:42:38.490985 ignition[730]: kargs: kargs passed Nov 1 00:42:38.497661 systemd[1]: Finished ignition-kargs.service. Nov 1 00:42:38.491047 ignition[730]: Ignition finished successfully Nov 1 00:42:38.500024 systemd[1]: Starting ignition-disks.service... Nov 1 00:42:38.510964 ignition[742]: Ignition 2.14.0 Nov 1 00:42:38.508123 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:42:38.510973 ignition[742]: Stage: disks Nov 1 00:42:38.516613 systemd[1]: Finished ignition-disks.service. Nov 1 00:42:38.511096 ignition[742]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:42:38.517572 systemd-networkd[728]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:42:38.511108 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:42:38.520564 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:42:38.512422 ignition[742]: disks: disks passed Nov 1 00:42:38.523801 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:42:38.512486 ignition[742]: Ignition finished successfully Nov 1 00:42:38.526254 systemd[1]: Reached target local-fs.target. Nov 1 00:42:38.529471 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:42:38.556873 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:42:38.560535 systemd[1]: Reached target remote-fs.target. Nov 1 00:42:38.563254 systemd[1]: Reached target sysinit.target. Nov 1 00:42:38.565925 systemd[1]: Reached target basic.target. Nov 1 00:42:38.569603 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:42:38.579256 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:42:38.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.583820 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:42:38.597036 systemd-fsck[762]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 00:42:38.605179 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:42:38.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.608694 systemd[1]: Mounting sysroot.mount... Nov 1 00:42:38.619495 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:42:38.620148 systemd[1]: Mounted sysroot.mount. 
Nov 1 00:42:38.621642 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:42:38.625665 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:42:38.627187 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:42:38.627237 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:42:38.627266 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:42:38.630824 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:42:38.634761 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:42:38.645241 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:42:38.647588 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:42:38.649905 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:42:38.652200 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:42:38.682880 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:42:38.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.687201 systemd[1]: Starting ignition-mount.service... Nov 1 00:42:38.691079 systemd[1]: Starting sysroot-boot.service... Nov 1 00:42:38.695532 bash[813]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 00:42:38.704430 ignition[814]: INFO : Ignition 2.14.0 Nov 1 00:42:38.704430 ignition[814]: INFO : Stage: mount Nov 1 00:42:38.709549 ignition[814]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:42:38.709549 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:42:38.709549 ignition[814]: INFO : mount: mount passed Nov 1 00:42:38.709549 ignition[814]: INFO : Ignition finished successfully Nov 1 00:42:38.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.706667 systemd[1]: Finished ignition-mount.service. Nov 1 00:42:38.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:38.716650 systemd[1]: Finished sysroot-boot.service. Nov 1 00:42:39.184146 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:42:39.197120 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (823) Nov 1 00:42:39.197170 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:39.197192 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:42:39.198729 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:42:39.204049 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:42:39.206362 systemd[1]: Starting ignition-files.service... 
Nov 1 00:42:39.221402 ignition[843]: INFO : Ignition 2.14.0 Nov 1 00:42:39.221402 ignition[843]: INFO : Stage: files Nov 1 00:42:39.224846 ignition[843]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:42:39.224846 ignition[843]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:42:39.224846 ignition[843]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:42:39.224846 ignition[843]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:42:39.224846 ignition[843]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:42:39.237325 ignition[843]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:42:39.237325 ignition[843]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:42:39.237325 ignition[843]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:42:39.237325 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:42:39.237325 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:42:39.226160 unknown[843]: wrote ssh authorized keys file for user: core Nov 1 00:42:39.287330 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:42:39.338512 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:42:39.341802 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:42:39.341802 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 1 00:42:39.432218 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:42:39.525495 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:42:39.525495 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:42:39.531812 ignition[843]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:42:39.531812 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:42:39.806694 systemd-networkd[728]: eth0: Gained IPv6LL Nov 1 00:42:39.873165 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:42:40.263832 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:42:40.263832 ignition[843]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 1 00:42:40.263832 ignition[843]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:42:40.275946 ignition[843]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:42:40.317325 kernel: kauditd_printk_skb: 25 callbacks suppressed Nov 1 00:42:40.317359 kernel: audit: type=1130 audit(1761957760.305:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:42:40.317470 ignition[843]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:42:40.317470 ignition[843]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:42:40.317470 ignition[843]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:42:40.317470 ignition[843]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:42:40.317470 ignition[843]: INFO : files: files passed Nov 1 00:42:40.317470 ignition[843]: INFO : Ignition finished successfully Nov 1 00:42:40.357202 kernel: audit: type=1130 audit(1761957760.325:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.357233 kernel: audit: type=1130 audit(1761957760.334:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.357244 kernel: audit: type=1131 audit(1761957760.334:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.302635 systemd[1]: Finished ignition-files.service. Nov 1 00:42:40.306403 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:42:40.360620 initrd-setup-root-after-ignition[867]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Nov 1 00:42:40.317334 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:42:40.366321 initrd-setup-root-after-ignition[869]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:42:40.318131 systemd[1]: Starting ignition-quench.service... Nov 1 00:42:40.321721 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:42:40.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.326174 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:42:40.388131 kernel: audit: type=1130 audit(1761957760.373:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:40.389343 kernel: audit: type=1131 audit(1761957760.373:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.326270 systemd[1]: Finished ignition-quench.service. Nov 1 00:42:40.335382 systemd[1]: Reached target ignition-complete.target. Nov 1 00:42:40.351242 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:42:40.370598 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:42:40.370711 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:42:40.373714 systemd[1]: Reached target initrd-fs.target. Nov 1 00:42:40.388096 systemd[1]: Reached target initrd.target. Nov 1 00:42:40.389373 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:42:40.411287 kernel: audit: type=1130 audit(1761957760.402:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.390220 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:42:40.400788 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:42:40.403846 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:42:40.416419 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:42:40.418308 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:42:40.421518 systemd[1]: Stopped target timers.target. Nov 1 00:42:40.424619 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:42:40.435412 kernel: audit: type=1131 audit(1761957760.427:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.424741 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:42:40.427773 systemd[1]: Stopped target initrd.target. Nov 1 00:42:40.435492 systemd[1]: Stopped target basic.target. Nov 1 00:42:40.436930 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:42:40.439666 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:42:40.442435 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:42:40.445268 systemd[1]: Stopped target remote-fs.target. Nov 1 00:42:40.448360 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:42:40.451370 systemd[1]: Stopped target sysinit.target. Nov 1 00:42:40.454091 systemd[1]: Stopped target local-fs.target. Nov 1 00:42:40.456665 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:42:40.474805 kernel: audit: type=1131 audit(1761957760.464:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:40.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.459319 systemd[1]: Stopped target swap.target. Nov 1 00:42:40.482819 kernel: audit: type=1131 audit(1761957760.474:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.462595 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:42:40.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.462772 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:42:40.465468 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:42:40.471985 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:42:40.472173 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:42:40.474904 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:42:40.475011 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:42:40.483015 systemd[1]: Stopped target paths.target. Nov 1 00:42:40.485895 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:42:40.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.490520 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:42:40.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.493232 systemd[1]: Stopped target slices.target. Nov 1 00:42:40.511694 iscsid[737]: iscsid shutting down. Nov 1 00:42:40.496559 systemd[1]: Stopped target sockets.target. Nov 1 00:42:40.515781 ignition[884]: INFO : Ignition 2.14.0 Nov 1 00:42:40.515781 ignition[884]: INFO : Stage: umount Nov 1 00:42:40.515781 ignition[884]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:42:40.515781 ignition[884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:42:40.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.499516 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:42:40.528536 ignition[884]: INFO : umount: umount passed Nov 1 00:42:40.528536 ignition[884]: INFO : Ignition finished successfully Nov 1 00:42:40.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:40.499748 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:42:40.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.502561 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:42:40.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.502750 systemd[1]: Stopped ignition-files.service. Nov 1 00:42:40.507137 systemd[1]: Stopping ignition-mount.service... Nov 1 00:42:40.508865 systemd[1]: Stopping iscsid.service... Nov 1 00:42:40.513630 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:42:40.515759 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:42:40.516019 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:42:40.520266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:42:40.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.520401 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:42:40.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.528478 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:42:40.529410 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:42:40.529576 systemd[1]: Stopped iscsid.service. Nov 1 00:42:40.532024 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:42:40.576000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:42:40.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.532093 systemd[1]: Stopped ignition-mount.service. 
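Looking back at the files stage logged above: before the umount stage tears the initramfs services down, Ignition has written a fixed set of payloads under /sysroot. The sketch below simply re-checks that each of those paths exists; it assumes it is run while /sysroot is still mounted (for example from an initramfs shell) and is a hypothetical verification helper, not anything Ignition itself performs.

import os

# Paths taken verbatim from the ignition[843] "files" stage entries in this log.
EXPECTED = [
    "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz",
    "/sysroot/opt/bin/cilium.tar.gz",
    "/sysroot/home/core/install.sh",
    "/sysroot/home/core/nginx.yaml",
    "/sysroot/home/core/nfs-pod.yaml",
    "/sysroot/home/core/nfs-pvc.yaml",
    "/sysroot/etc/flatcar/update.conf",
    "/sysroot/etc/extensions/kubernetes.raw",   # symlink into /opt/extensions
    "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
    "/sysroot/etc/systemd/system/prepare-helm.service",
    "/sysroot/etc/systemd/system/coreos-metadata.service",
    "/sysroot/etc/.ignition-result.json",
]

for path in EXPECTED:
    # lexists() so the kubernetes.raw symlink counts even though it is a link, not a file.
    status = "ok" if os.path.lexists(path) else "MISSING"
    print(f"{status:7s} {path}")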
Nov 1 00:42:40.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.535011 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:42:40.535096 systemd[1]: Closed iscsid.socket. Nov 1 00:42:40.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.537703 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:42:40.537740 systemd[1]: Stopped ignition-disks.service. Nov 1 00:42:40.538402 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:42:40.538509 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:42:40.538701 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:42:40.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.538734 systemd[1]: Stopped ignition-setup.service. Nov 1 00:42:40.539026 systemd[1]: Stopping iscsiuio.service... Nov 1 00:42:40.539416 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:42:40.539495 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:42:40.542221 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:42:40.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.542314 systemd[1]: Stopped iscsiuio.service. Nov 1 00:42:40.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.543010 systemd[1]: Stopped target network.target. Nov 1 00:42:40.543271 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:42:40.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.543298 systemd[1]: Closed iscsiuio.socket. Nov 1 00:42:40.543937 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:42:40.544211 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:42:40.557502 systemd-networkd[728]: eth0: DHCPv6 lease lost Nov 1 00:42:40.628000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:42:40.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.559153 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:42:40.559231 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:42:40.564111 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:42:40.564193 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:42:40.574072 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:42:40.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:40.574110 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:42:40.577886 systemd[1]: Stopping network-cleanup.service... Nov 1 00:42:40.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.579820 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:42:40.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.579874 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:42:40.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.580182 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:42:40.580220 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:42:40.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:40.583436 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:42:40.583507 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:42:40.585563 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:42:40.587848 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:42:40.591418 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:42:40.591611 systemd[1]: Stopped network-cleanup.service. Nov 1 00:42:40.597667 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:42:40.599691 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:42:40.606775 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:42:40.606829 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:42:40.608044 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:42:40.608074 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:42:40.610793 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:42:40.610839 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:42:40.615630 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:42:40.615701 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:42:40.618565 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:42:40.618639 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:42:40.623780 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:42:40.625324 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:42:40.625373 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Nov 1 00:42:40.639272 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:42:40.639331 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:42:40.640279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:42:40.640319 systemd[1]: Stopped systemd-vconsole-setup.service. 
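Most of the teardown above is recorded twice, once as a systemd "Stopped ..." message and once as an audit SERVICE_STOP record. One way to cross-check the two views is to tally the audit records per unit over a saved copy of this log; the sketch below does exactly that, with boot.log again standing in for wherever the capture actually lives.

import re
from collections import Counter

AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+)")

starts, stops = Counter(), Counter()
with open("boot.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        for kind, unit in AUDIT_RE.findall(line):
            (starts if kind == "SERVICE_START" else stops)[unit] += 1

for unit in sorted(set(starts) | set(stops)):
    print(f"{unit:40s} started={starts[unit]} stopped={stops[unit]}")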
Nov 1 00:42:40.704789 systemd-journald[199]: Received SIGTERM from PID 1 (n/a). Nov 1 00:42:40.647980 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 1 00:42:40.648447 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:42:40.648544 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:42:40.649482 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:42:40.649549 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:42:40.653201 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:42:40.656046 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:42:40.656086 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:42:40.661071 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:42:40.678511 systemd[1]: Switching root. Nov 1 00:42:40.712917 systemd-journald[199]: Journal stopped Nov 1 00:42:44.815025 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:42:44.815138 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 00:42:44.815154 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:42:44.815169 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:42:44.815181 kernel: SELinux: policy capability open_perms=1 Nov 1 00:42:44.815196 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:42:44.815211 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:42:44.815227 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:42:44.815241 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:42:44.815255 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:42:44.815273 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:42:44.815286 systemd[1]: Successfully loaded SELinux policy in 45.869ms. Nov 1 00:42:44.815322 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.023ms. Nov 1 00:42:44.815336 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:42:44.815355 systemd[1]: Detected virtualization kvm. Nov 1 00:42:44.815370 systemd[1]: Detected architecture x86-64. Nov 1 00:42:44.815383 systemd[1]: Detected first boot. Nov 1 00:42:44.815393 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:42:44.815416 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:42:44.815435 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:42:44.815464 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:44.815489 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:44.815510 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:44.815540 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
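"Initializing machine ID from VM UUID" above indicates that, on this first boot, systemd seeded /etc/machine-id from the DMI product UUID exposed by KVM. The sketch below reads both values on a running guest so they can be compared; the assumed derivation (dashes stripped, lower-cased) is recalled from systemd's documented behaviour and should be confirmed against machine-id(5) rather than taken from this log.

from pathlib import Path

machine_id = Path("/etc/machine-id").read_text().strip()
# product_uuid is usually readable only by root.
product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

print("machine-id  :", machine_id)
print("product_uuid:", product_uuid)
# Assumed relation on a first boot like the one logged above (verify against machine-id(5)):
print("derived match:", machine_id == product_uuid.replace("-", "").lower())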
Nov 1 00:42:44.815553 systemd[1]: Stopped initrd-switch-root.service. Nov 1 00:42:44.815570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:42:44.815594 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:42:44.815607 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:42:44.815619 systemd[1]: Created slice system-getty.slice. Nov 1 00:42:44.815632 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:42:44.815643 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:42:44.815653 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:42:44.815664 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:42:44.815674 systemd[1]: Created slice user.slice. Nov 1 00:42:44.815687 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:42:44.815707 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:42:44.815718 systemd[1]: Set up automount boot.automount. Nov 1 00:42:44.815733 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:42:44.815744 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 00:42:44.815754 systemd[1]: Stopped target initrd-fs.target. Nov 1 00:42:44.815770 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 00:42:44.815780 systemd[1]: Reached target integritysetup.target. Nov 1 00:42:44.815791 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:42:44.815810 systemd[1]: Reached target remote-fs.target. Nov 1 00:42:44.815824 systemd[1]: Reached target slices.target. Nov 1 00:42:44.815839 systemd[1]: Reached target swap.target. Nov 1 00:42:44.815849 systemd[1]: Reached target torcx.target. Nov 1 00:42:44.815867 systemd[1]: Reached target veritysetup.target. Nov 1 00:42:44.815883 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:42:44.815896 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:42:44.815906 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:42:44.815922 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:42:44.815937 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:42:44.815962 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:42:44.815982 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:42:44.815995 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:42:44.816019 systemd[1]: Mounting media.mount... Nov 1 00:42:44.816030 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:44.816041 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:42:44.816051 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:42:44.816062 systemd[1]: Mounting tmp.mount... Nov 1 00:42:44.816076 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:42:44.816097 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:44.816108 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:42:44.816123 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:42:44.816138 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:44.816155 systemd[1]: Starting modprobe@drm.service... Nov 1 00:42:44.816172 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:44.816205 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:42:44.816224 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:44.816236 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
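The "systemd 252 running in system mode (+PAM +AUDIT ...)" entry a little earlier packs the build's compile-time options into +/- flags. The short parser below, fed that exact string, separates them into enabled and disabled features; it is only a reading aid for the log line, not an interface systemd provides.

# Feature string copied verbatim from the "systemd 252 running in system mode" entry above.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP "
            "+SYSVINIT default-hierarchy=unified")

tokens = FEATURES.split()
enabled = [t[1:] for t in tokens if t.startswith("+")]
disabled = [t[1:] for t in tokens if t.startswith("-")]
print(len(enabled), "enabled :", " ".join(enabled))
print(len(disabled), "disabled:", " ".join(disabled))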
Nov 1 00:42:44.816254 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:42:44.816265 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:42:44.816275 kernel: fuse: init (API version 7.34) Nov 1 00:42:44.816285 kernel: loop: module loaded Nov 1 00:42:44.816299 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:42:44.816310 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:42:44.816322 systemd[1]: Stopped systemd-journald.service. Nov 1 00:42:44.816341 systemd[1]: systemd-journald.service: Consumed 1.106s CPU time. Nov 1 00:42:44.816354 systemd[1]: Starting systemd-journald.service... Nov 1 00:42:44.816371 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:42:44.816382 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:42:44.816399 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:42:44.816410 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:42:44.816430 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:42:44.816472 systemd[1]: Stopped verity-setup.service. Nov 1 00:42:44.816496 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:44.816510 systemd-journald[1007]: Journal started Nov 1 00:42:44.816608 systemd-journald[1007]: Runtime Journal (/run/log/journal/bc7a93b2b6544f86b07a782fc0210b3a) is 6.0M, max 48.5M, 42.5M free. Nov 1 00:42:40.771000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:42:41.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:42:41.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:42:41.149000 audit: BPF prog-id=10 op=LOAD Nov 1 00:42:41.149000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:42:41.149000 audit: BPF prog-id=11 op=LOAD Nov 1 00:42:41.149000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:42:41.181000 audit[918]: AVC avc: denied { associate } for pid=918 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:42:41.181000 audit[918]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558a2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=901 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:41.181000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:42:41.184000 audit[918]: AVC avc: denied { associate } for pid=918 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:42:41.184000 audit[918]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155979 a2=1ed a3=0 items=2 ppid=901 pid=918 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:41.184000 audit: CWD cwd="/" Nov 1 00:42:41.184000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:41.184000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:41.184000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:42:44.590000 audit: BPF prog-id=12 op=LOAD Nov 1 00:42:44.590000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:42:44.590000 audit: BPF prog-id=13 op=LOAD Nov 1 00:42:44.590000 audit: BPF prog-id=14 op=LOAD Nov 1 00:42:44.590000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:42:44.590000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:42:44.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.606000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:42:44.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:44.784000 audit: BPF prog-id=15 op=LOAD Nov 1 00:42:44.785000 audit: BPF prog-id=16 op=LOAD Nov 1 00:42:44.785000 audit: BPF prog-id=17 op=LOAD Nov 1 00:42:44.785000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:42:44.785000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:42:44.813000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:42:44.813000 audit[1007]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe1cf1acf0 a2=4000 a3=7ffe1cf1ad8c items=0 ppid=1 pid=1007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.813000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:42:44.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.586875 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:42:41.179968 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:44.586893 systemd[1]: Unnecessary job was removed for dev-vda6.device. Nov 1 00:42:41.180235 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:42:44.592146 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:42:41.180253 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:42:44.592644 systemd[1]: systemd-journald.service: Consumed 1.106s CPU time. 
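The raw audit SYSCALL records in this stretch identify calls only by number under arch=c000003e (x86_64): 188 and 258 for the torcx generator above, 46 for systemd-journald just above, and 175 for a udev worker further below. The table sketched here covers exactly those four numbers, written from memory of the x86_64 syscall table; anything else should be looked up in the kernel's syscall_64.tbl.

# Hand-written subset of the x86_64 syscall table, covering only the numbers that
# appear in the audit SYSCALL records of this log (verify against syscall_64.tbl).
X86_64_SYSCALLS = {
    46: "sendmsg",       # seen in the audit[1007] record (comm="systemd-journal")
    175: "init_module",  # seen in the audit[1053] record (comm="(udev-worker)")
    188: "setxattr",     # seen in an audit[918] record (comm="torcx-generator")
    258: "mkdirat",      # also from the torcx-generator records
}

for nr in sorted(X86_64_SYSCALLS):
    print(f"syscall={nr:<3d} -> {X86_64_SYSCALLS[nr]}")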
Nov 1 00:42:41.180282 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:42:41.180292 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:42:41.180323 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:42:41.180334 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:42:41.180563 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:42:41.180599 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:42:41.180611 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:42:41.181317 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:42:41.181348 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:42:41.181364 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:42:41.181377 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:42:41.181393 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:42:41.181406 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:42:43.354779 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:43Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:42:43.355175 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:43Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:42:43.355388 /usr/lib/systemd/system-generators/torcx-generator[918]: 
time="2025-11-01T00:42:43Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:42:43.355678 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:43Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:42:43.355753 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:43Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:42:43.355869 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-11-01T00:42:43Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:42:44.824628 systemd[1]: Started systemd-journald.service. Nov 1 00:42:44.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.826136 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:42:44.827513 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:42:44.829007 systemd[1]: Mounted media.mount. Nov 1 00:42:44.830212 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:42:44.831975 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:42:44.833602 systemd[1]: Mounted tmp.mount. Nov 1 00:42:44.835125 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:42:44.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.837096 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:42:44.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.838998 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:42:44.839202 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:42:44.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.841098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:44.841308 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:44.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:44.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.843164 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:42:44.843340 systemd[1]: Finished modprobe@drm.service. Nov 1 00:42:44.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.845236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:44.845418 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:44.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.847286 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:42:44.847471 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:42:44.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.849226 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:44.849398 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:44.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.851235 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:42:44.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.853283 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:42:44.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.855504 systemd[1]: Finished systemd-remount-fs.service. 
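A few entries back the torcx generator sealed its state to /run/metadata/torcx as TORCX_* key/value pairs. Assuming the file holds those pairs in plain KEY="value" form, as the "system state sealed" entry suggests, the sketch below reads them back; the exact on-disk layout is an assumption and worth verifying on a live machine.

import shlex
from pathlib import Path

# The torcx generator sealed its state to /run/metadata/torcx earlier in this log,
# as KEY="value" pairs (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, ...).
raw = Path("/run/metadata/torcx").read_text(encoding="utf-8")

torcx_state = {}
for token in shlex.split(raw):   # tolerant of one-per-line or space-separated pairs
    key, sep, value = token.partition("=")
    if sep:
        torcx_state[key] = value

print(torcx_state.get("TORCX_LOWER_PROFILES"))  # expected "vendor" for this boot
print(torcx_state.get("TORCX_UNPACKDIR"))       # expected "/run/torcx/unpack"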
Nov 1 00:42:44.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.857732 systemd[1]: Reached target network-pre.target. Nov 1 00:42:44.860944 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:42:44.863384 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:42:44.864640 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:42:44.866236 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:42:44.868502 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:42:44.870097 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:44.871577 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:42:44.873378 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:44.875258 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:42:44.879317 systemd-journald[1007]: Time spent on flushing to /var/log/journal/bc7a93b2b6544f86b07a782fc0210b3a is 24.145ms for 1102 entries. Nov 1 00:42:44.879317 systemd-journald[1007]: System Journal (/var/log/journal/bc7a93b2b6544f86b07a782fc0210b3a) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:42:44.941837 systemd-journald[1007]: Received client request to flush runtime journal. Nov 1 00:42:44.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.879322 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:42:44.886261 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:42:44.888354 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:42:44.890324 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:42:44.892649 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:42:44.910994 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:42:44.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.928707 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:42:44.932492 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:42:44.941981 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:42:44.943834 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:42:44.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:44.946557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
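The journald statistics above (24.145 ms spent flushing 1102 entries, and a system journal using 8.0M of a 195.6M cap with 187.6M free) reduce to a couple of divisions. The snippet below only spells out that arithmetic; every number is copied from the log, nothing is measured.

# Figures copied from the systemd-journald[1007] status lines above.
flush_ms, entries = 24.145, 1102
size_mib, max_mib, free_mib = 8.0, 195.6, 187.6

print(f"flush cost per entry : {flush_ms / entries * 1000:.1f} microseconds")
print(f"system journal in use: {size_mib} MiB of {max_mib} MiB cap "
      f"({size_mib / max_mib:.1%}), {free_mib} MiB still available")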
Nov 1 00:42:44.952091 udevadm[1024]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:42:44.970911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:42:44.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:45.724331 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:42:45.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:45.727206 kernel: kauditd_printk_skb: 95 callbacks suppressed Nov 1 00:42:45.727267 kernel: audit: type=1130 audit(1761957765.725:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:45.732000 audit: BPF prog-id=18 op=LOAD Nov 1 00:42:45.734671 kernel: audit: type=1334 audit(1761957765.732:133): prog-id=18 op=LOAD Nov 1 00:42:45.734723 kernel: audit: type=1334 audit(1761957765.734:134): prog-id=19 op=LOAD Nov 1 00:42:45.734000 audit: BPF prog-id=19 op=LOAD Nov 1 00:42:45.735350 systemd[1]: Starting systemd-udevd.service... Nov 1 00:42:45.734000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:42:45.738781 kernel: audit: type=1334 audit(1761957765.734:135): prog-id=7 op=UNLOAD Nov 1 00:42:45.741844 kernel: audit: type=1334 audit(1761957765.734:136): prog-id=8 op=UNLOAD Nov 1 00:42:45.734000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:42:45.759112 systemd-udevd[1027]: Using default interface naming scheme 'v252'. Nov 1 00:42:45.773125 systemd[1]: Started systemd-udevd.service. Nov 1 00:42:45.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:45.781000 audit: BPF prog-id=20 op=LOAD Nov 1 00:42:45.781811 systemd[1]: Starting systemd-networkd.service... Nov 1 00:42:45.782991 kernel: audit: type=1130 audit(1761957765.774:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:45.783046 kernel: audit: type=1334 audit(1761957765.781:138): prog-id=20 op=LOAD Nov 1 00:42:45.788000 audit: BPF prog-id=21 op=LOAD Nov 1 00:42:45.790000 audit: BPF prog-id=22 op=LOAD Nov 1 00:42:45.792631 kernel: audit: type=1334 audit(1761957765.788:139): prog-id=21 op=LOAD Nov 1 00:42:45.792693 kernel: audit: type=1334 audit(1761957765.790:140): prog-id=22 op=LOAD Nov 1 00:42:45.792722 kernel: audit: type=1334 audit(1761957765.792:141): prog-id=23 op=LOAD Nov 1 00:42:45.792000 audit: BPF prog-id=23 op=LOAD Nov 1 00:42:45.793371 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:42:45.816524 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Nov 1 00:42:45.828360 systemd[1]: Started systemd-userdbd.service. Nov 1 00:42:45.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:42:45.950627 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:42:45.958482 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:42:45.973755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:42:45.904000 audit[1053]: AVC avc: denied { confidentiality } for pid=1053 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:42:45.904000 audit[1053]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561d86e94960 a1=338ec a2=7fb8c984abc5 a3=5 items=110 ppid=1027 pid=1053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:45.904000 audit: CWD cwd="/" Nov 1 00:42:45.904000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=1 name=(null) inode=11848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=2 name=(null) inode=11848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=3 name=(null) inode=11849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=4 name=(null) inode=11848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=5 name=(null) inode=11850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=6 name=(null) inode=11848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=7 name=(null) inode=11851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=8 name=(null) inode=11851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=9 name=(null) inode=11852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=10 name=(null) inode=11851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=11 name=(null) inode=11853 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=12 name=(null) inode=11851 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=13 name=(null) inode=11854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=14 name=(null) inode=11851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=15 name=(null) inode=11855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=16 name=(null) inode=11851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=17 name=(null) inode=11856 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=18 name=(null) inode=11848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=19 name=(null) inode=11857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=20 name=(null) inode=11857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=21 name=(null) inode=11858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=22 name=(null) inode=11857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=23 name=(null) inode=11859 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=24 name=(null) inode=11857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=25 name=(null) inode=11860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=26 name=(null) inode=11857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=27 name=(null) inode=11861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=28 name=(null) inode=11857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=29 name=(null) inode=11862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=30 name=(null) inode=11848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=31 name=(null) inode=11863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=32 name=(null) inode=11863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=33 name=(null) inode=11864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=34 name=(null) inode=11863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=35 name=(null) inode=11865 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=36 name=(null) inode=11863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=37 name=(null) inode=11866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=38 name=(null) inode=11863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=39 name=(null) inode=11867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=40 name=(null) inode=11863 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=41 name=(null) inode=11868 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=42 name=(null) inode=11848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=43 name=(null) inode=11869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=44 name=(null) inode=11869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 
audit: PATH item=45 name=(null) inode=11870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=46 name=(null) inode=11869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=47 name=(null) inode=11871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=48 name=(null) inode=11869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=49 name=(null) inode=11872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=50 name=(null) inode=11869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=51 name=(null) inode=11873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=52 name=(null) inode=11869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=53 name=(null) inode=11874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=55 name=(null) inode=11875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=56 name=(null) inode=11875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=57 name=(null) inode=11876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.996112 systemd-networkd[1035]: lo: Link UP Nov 1 00:42:45.997593 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:42:45.997805 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:42:45.997974 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:42:45.996118 systemd-networkd[1035]: lo: Gained carrier Nov 1 00:42:45.996775 systemd-networkd[1035]: Enumeration completed Nov 1 00:42:45.996903 systemd-networkd[1035]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 00:42:45.904000 audit: PATH item=58 name=(null) inode=11875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=59 name=(null) inode=11877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=60 name=(null) inode=11875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=61 name=(null) inode=11878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=62 name=(null) inode=11878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=63 name=(null) inode=11879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=64 name=(null) inode=11878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=65 name=(null) inode=11880 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=66 name=(null) inode=11878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=67 name=(null) inode=11881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.999142 systemd-networkd[1035]: eth0: Link UP Nov 1 00:42:45.999147 systemd-networkd[1035]: eth0: Gained carrier Nov 1 00:42:45.904000 audit: PATH item=68 name=(null) inode=11878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=69 name=(null) inode=11882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=70 name=(null) inode=11878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=71 name=(null) inode=11883 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=72 name=(null) inode=11875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=73 name=(null) inode=11884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=74 name=(null) inode=11884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=75 name=(null) inode=11885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=76 name=(null) inode=11884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=77 name=(null) inode=11886 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=78 name=(null) inode=11884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=79 name=(null) inode=11887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=80 name=(null) inode=11884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=81 name=(null) inode=11888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=82 name=(null) inode=11884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=83 name=(null) inode=11889 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=84 name=(null) inode=11875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=85 name=(null) inode=11890 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=86 name=(null) inode=11890 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=87 name=(null) inode=11891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=88 name=(null) inode=11890 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=89 name=(null) inode=11892 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=90 name=(null) 
inode=11890 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=91 name=(null) inode=11893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=92 name=(null) inode=11890 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=93 name=(null) inode=11894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=94 name=(null) inode=11890 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=95 name=(null) inode=11895 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=96 name=(null) inode=11875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=97 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=98 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=99 name=(null) inode=11897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=100 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=101 name=(null) inode=11898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=102 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=103 name=(null) inode=11899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=104 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=105 name=(null) inode=11900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=106 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=107 name=(null) inode=11901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PATH item=109 name=(null) inode=11902 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:45.904000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:42:46.013476 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:42:46.030748 systemd-networkd[1035]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:42:46.051486 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:42:46.078523 kernel: kvm: Nested Virtualization enabled Nov 1 00:42:46.078639 kernel: SVM: kvm: Nested Paging enabled Nov 1 00:42:46.078655 kernel: SVM: Virtual VMLOAD VMSAVE supported Nov 1 00:42:46.078674 kernel: SVM: Virtual GIF supported Nov 1 00:42:46.083661 systemd[1]: Started systemd-networkd.service. Nov 1 00:42:46.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.116526 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:42:46.140479 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:42:46.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.143788 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:42:46.159549 lvm[1062]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:46.189780 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:42:46.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.191615 systemd[1]: Reached target cryptsetup.target. Nov 1 00:42:46.194547 systemd[1]: Starting lvm2-activation.service... Nov 1 00:42:46.199367 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:46.233010 systemd[1]: Finished lvm2-activation.service. Nov 1 00:42:46.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.235248 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:42:46.236827 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:42:46.236864 systemd[1]: Reached target local-fs.target. Nov 1 00:42:46.238545 systemd[1]: Reached target machines.target. Nov 1 00:42:46.241425 systemd[1]: Starting ldconfig.service... 
Nov 1 00:42:46.243640 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:46.243711 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:46.245211 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:42:46.247901 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:42:46.252846 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:42:46.256264 systemd[1]: Starting systemd-sysext.service... Nov 1 00:42:46.259180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:42:46.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.261531 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1065 (bootctl) Nov 1 00:42:46.262775 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:42:46.268584 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:42:46.275387 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:42:46.275651 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:42:46.288501 kernel: loop0: detected capacity change from 0 to 219144 Nov 1 00:42:46.325855 systemd-fsck[1073]: fsck.fat 4.2 (2021-01-31) Nov 1 00:42:46.325855 systemd-fsck[1073]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 00:42:46.328211 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:42:46.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.331961 systemd[1]: Mounting boot.mount... Nov 1 00:42:46.541926 systemd[1]: Mounted boot.mount. Nov 1 00:42:46.555506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:42:46.555765 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:42:46.556355 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:42:46.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.558897 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:42:46.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.573487 kernel: loop1: detected capacity change from 0 to 219144 Nov 1 00:42:46.577620 (sd-sysext)[1078]: Using extensions 'kubernetes'. Nov 1 00:42:46.578010 (sd-sysext)[1078]: Merged extensions into '/usr'. Nov 1 00:42:46.595182 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:46.597091 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:42:46.599391 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:46.601970 systemd[1]: Starting modprobe@dm_mod.service... 
Nov 1 00:42:46.605026 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:46.608268 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:46.610018 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:46.610175 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:46.610327 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:46.613471 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:42:46.615541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:46.615692 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:46.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.617621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:46.617740 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:46.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.620703 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:46.620885 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:46.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.623142 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:46.623298 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:46.624286 systemd[1]: Finished systemd-sysext.service. Nov 1 00:42:46.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.628016 systemd[1]: Starting ensure-sysext.service... Nov 1 00:42:46.630831 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:42:46.692639 systemd[1]: Reloading. Nov 1 00:42:46.695810 ldconfig[1064]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Nov 1 00:42:46.705226 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:42:46.707092 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:42:46.711045 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:42:46.783497 /usr/lib/systemd/system-generators/torcx-generator[1105]: time="2025-11-01T00:42:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:46.783852 /usr/lib/systemd/system-generators/torcx-generator[1105]: time="2025-11-01T00:42:46Z" level=info msg="torcx already run" Nov 1 00:42:46.831699 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:46.831719 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:46.852463 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:46.916000 audit: BPF prog-id=24 op=LOAD Nov 1 00:42:46.916000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:42:46.917000 audit: BPF prog-id=25 op=LOAD Nov 1 00:42:46.917000 audit: BPF prog-id=26 op=LOAD Nov 1 00:42:46.917000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:42:46.917000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:42:46.918000 audit: BPF prog-id=27 op=LOAD Nov 1 00:42:46.918000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:42:46.918000 audit: BPF prog-id=28 op=LOAD Nov 1 00:42:46.918000 audit: BPF prog-id=29 op=LOAD Nov 1 00:42:46.918000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:42:46.918000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:42:46.919000 audit: BPF prog-id=30 op=LOAD Nov 1 00:42:46.919000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:42:46.919000 audit: BPF prog-id=31 op=LOAD Nov 1 00:42:46.919000 audit: BPF prog-id=32 op=LOAD Nov 1 00:42:46.919000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:42:46.919000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:42:46.923667 systemd[1]: Finished ldconfig.service. Nov 1 00:42:46.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.926017 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:42:46.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.930263 systemd[1]: Starting audit-rules.service... Nov 1 00:42:46.932694 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:42:46.935193 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:42:46.936000 audit: BPF prog-id=33 op=LOAD Nov 1 00:42:46.938024 systemd[1]: Starting systemd-resolved.service... Nov 1 00:42:46.939000 audit: BPF prog-id=34 op=LOAD Nov 1 00:42:46.940673 systemd[1]: Starting systemd-timesyncd.service... 
Nov 1 00:42:46.943117 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:42:46.947379 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:42:46.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.949869 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:42:46.949000 audit[1158]: SYSTEM_BOOT pid=1158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.958636 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:42:46.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.961657 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:42:46.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:46.967794 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:46.969192 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:46.971523 systemd[1]: Starting modprobe@drm.service... Nov 1 00:42:46.973599 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:46.976895 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:46.978494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:46.978706 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:46.980599 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:42:46.981273 augenrules[1171]: No rules Nov 1 00:42:46.980000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:42:46.980000 audit[1171]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffe37cce80 a2=420 a3=0 items=0 ppid=1147 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:46.980000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:42:46.983780 systemd[1]: Starting systemd-update-done.service... Nov 1 00:42:46.985399 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:42:46.990306 systemd[1]: Finished audit-rules.service. Nov 1 00:42:46.992332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:46.992556 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:46.995167 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 1 00:42:46.995319 systemd[1]: Finished modprobe@drm.service. Nov 1 00:42:46.997320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:46.997573 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:47.001763 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:47.001941 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:47.004082 systemd[1]: Finished systemd-update-done.service. Nov 1 00:42:47.006266 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:47.006366 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:47.007594 systemd[1]: Finished ensure-sysext.service. Nov 1 00:42:47.014459 systemd-resolved[1153]: Positive Trust Anchors: Nov 1 00:42:47.014480 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:42:47.014506 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:42:47.027270 systemd-resolved[1153]: Defaulting to hostname 'linux'. Nov 1 00:42:47.028832 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:42:47.030433 systemd[1]: Started systemd-resolved.service. Nov 1 00:42:47.032037 systemd[1]: Reached target network.target. Nov 1 00:42:47.033615 systemd[1]: Reached target nss-lookup.target. Nov 1 00:42:48.029252 systemd-resolved[1153]: Clock change detected. Flushing caches. Nov 1 00:42:48.029926 systemd-timesyncd[1155]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:42:48.030016 systemd-timesyncd[1155]: Initial clock synchronization to Sat 2025-11-01 00:42:48.029212 UTC. Nov 1 00:42:48.030665 systemd[1]: Reached target sysinit.target. Nov 1 00:42:48.032115 systemd[1]: Started motdgen.path. Nov 1 00:42:48.033328 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:42:48.035233 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:42:48.036696 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:42:48.036729 systemd[1]: Reached target paths.target. Nov 1 00:42:48.039070 systemd[1]: Reached target time-set.target. Nov 1 00:42:48.040572 systemd[1]: Started logrotate.timer. Nov 1 00:42:48.041887 systemd[1]: Started mdadm.timer. Nov 1 00:42:48.043061 systemd[1]: Reached target timers.target. Nov 1 00:42:48.044681 systemd[1]: Listening on dbus.socket. Nov 1 00:42:48.046928 systemd[1]: Starting docker.socket... Nov 1 00:42:48.049988 systemd[1]: Listening on sshd.socket. Nov 1 00:42:48.051380 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:48.051787 systemd[1]: Listening on docker.socket. Nov 1 00:42:48.053342 systemd[1]: Reached target sockets.target. Nov 1 00:42:48.054715 systemd[1]: Reached target basic.target. 
Nov 1 00:42:48.056099 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:42:48.056133 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:42:48.056987 systemd[1]: Starting containerd.service... Nov 1 00:42:48.059066 systemd[1]: Starting dbus.service... Nov 1 00:42:48.061067 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:42:48.063414 systemd[1]: Starting extend-filesystems.service... Nov 1 00:42:48.064788 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:42:48.065803 systemd[1]: Starting motdgen.service... Nov 1 00:42:48.076573 jq[1183]: false Nov 1 00:42:48.068766 systemd[1]: Starting prepare-helm.service... Nov 1 00:42:48.072483 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:42:48.075096 systemd[1]: Starting sshd-keygen.service... Nov 1 00:42:48.079274 systemd[1]: Starting systemd-logind.service... Nov 1 00:42:48.088078 extend-filesystems[1184]: Found loop1 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found sr0 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda1 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda2 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda3 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found usr Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda4 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda6 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda7 Nov 1 00:42:48.088078 extend-filesystems[1184]: Found vda9 Nov 1 00:42:48.088078 extend-filesystems[1184]: Checking size of /dev/vda9 Nov 1 00:42:48.171104 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:42:48.081132 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:48.149598 dbus-daemon[1182]: [system] SELinux support is enabled Nov 1 00:42:48.172170 extend-filesystems[1184]: Resized partition /dev/vda9 Nov 1 00:42:48.081219 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:42:48.175736 extend-filesystems[1210]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:42:48.081774 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:42:48.179184 jq[1201]: true Nov 1 00:42:48.082585 systemd[1]: Starting update-engine.service... Nov 1 00:42:48.085184 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:42:48.229951 tar[1204]: linux-amd64/LICENSE Nov 1 00:42:48.229951 tar[1204]: linux-amd64/helm Nov 1 00:42:48.088850 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:42:48.231277 jq[1211]: true Nov 1 00:42:48.089138 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:42:48.090401 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:42:48.090726 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:42:48.146612 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:42:48.146889 systemd[1]: Finished motdgen.service. Nov 1 00:42:48.150536 systemd[1]: Started dbus.service. 
Nov 1 00:42:48.155589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:42:48.155619 systemd[1]: Reached target system-config.target. Nov 1 00:42:48.157589 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:42:48.157617 systemd[1]: Reached target user-config.target. Nov 1 00:42:48.243064 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:42:48.248711 update_engine[1200]: I1101 00:42:48.247443 1200 main.cc:92] Flatcar Update Engine starting Nov 1 00:42:48.276174 update_engine[1200]: I1101 00:42:48.250783 1200 update_check_scheduler.cc:74] Next update check in 9m13s Nov 1 00:42:48.251860 systemd[1]: Started update-engine.service. Nov 1 00:42:48.255853 systemd[1]: Started locksmithd.service. Nov 1 00:42:48.272950 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:48.272981 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:48.276077 systemd-logind[1194]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:42:48.276112 systemd-logind[1194]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:42:48.277139 systemd-logind[1194]: New seat seat0. Nov 1 00:42:48.279907 systemd[1]: Started systemd-logind.service. Nov 1 00:42:48.285341 extend-filesystems[1210]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:42:48.285341 extend-filesystems[1210]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:42:48.285341 extend-filesystems[1210]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:42:48.282515 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:42:48.298785 env[1212]: time="2025-11-01T00:42:48.292856424Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:42:48.299088 extend-filesystems[1184]: Resized filesystem in /dev/vda9 Nov 1 00:42:48.300855 bash[1232]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:42:48.282935 systemd[1]: Finished extend-filesystems.service. Nov 1 00:42:48.292652 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:42:48.351813 env[1212]: time="2025-11-01T00:42:48.351685860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:42:48.351944 env[1212]: time="2025-11-01T00:42:48.351888350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:48.353600 env[1212]: time="2025-11-01T00:42:48.353567269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:48.353600 env[1212]: time="2025-11-01T00:42:48.353593137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:48.353805 env[1212]: time="2025-11-01T00:42:48.353779427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:48.353805 env[1212]: time="2025-11-01T00:42:48.353800626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:48.353880 env[1212]: time="2025-11-01T00:42:48.353811827Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:42:48.353880 env[1212]: time="2025-11-01T00:42:48.353822578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:48.353933 env[1212]: time="2025-11-01T00:42:48.353888080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:48.354167 env[1212]: time="2025-11-01T00:42:48.354142317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:48.354299 env[1212]: time="2025-11-01T00:42:48.354264977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:48.354299 env[1212]: time="2025-11-01T00:42:48.354292849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:42:48.354387 env[1212]: time="2025-11-01T00:42:48.354337694Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:42:48.354387 env[1212]: time="2025-11-01T00:42:48.354352471Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:42:48.361287 env[1212]: time="2025-11-01T00:42:48.361257030Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:42:48.361349 env[1212]: time="2025-11-01T00:42:48.361290773Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:42:48.361349 env[1212]: time="2025-11-01T00:42:48.361304749Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:42:48.361349 env[1212]: time="2025-11-01T00:42:48.361339074Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361351397Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361365163Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361376364Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361389538Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361403605Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361415918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361437809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.361464 env[1212]: time="2025-11-01T00:42:48.361458999Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:42:48.361768 env[1212]: time="2025-11-01T00:42:48.361594402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:42:48.361768 env[1212]: time="2025-11-01T00:42:48.361714458Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:42:48.361966 env[1212]: time="2025-11-01T00:42:48.361942946Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:42:48.362021 env[1212]: time="2025-11-01T00:42:48.361974485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362021 env[1212]: time="2025-11-01T00:42:48.361987209Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:42:48.362126 env[1212]: time="2025-11-01T00:42:48.362097887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362126 env[1212]: time="2025-11-01T00:42:48.362112123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362126 env[1212]: time="2025-11-01T00:42:48.362125719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362223 env[1212]: time="2025-11-01T00:42:48.362135668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362223 env[1212]: time="2025-11-01T00:42:48.362146057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362223 env[1212]: time="2025-11-01T00:42:48.362164622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362223 env[1212]: time="2025-11-01T00:42:48.362175622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362223 env[1212]: time="2025-11-01T00:42:48.362187966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362223 env[1212]: time="2025-11-01T00:42:48.362199277Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:42:48.362389 env[1212]: time="2025-11-01T00:42:48.362330693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362389 env[1212]: time="2025-11-01T00:42:48.362344569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362389 env[1212]: time="2025-11-01T00:42:48.362354979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Nov 1 00:42:48.362389 env[1212]: time="2025-11-01T00:42:48.362365659Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:42:48.362389 env[1212]: time="2025-11-01T00:42:48.362377571Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:42:48.362389 env[1212]: time="2025-11-01T00:42:48.362386648Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:42:48.362593 env[1212]: time="2025-11-01T00:42:48.362405463Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:42:48.362593 env[1212]: time="2025-11-01T00:42:48.362438856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:42:48.362718 env[1212]: time="2025-11-01T00:42:48.362646325Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:42:48.362718 env[1212]: time="2025-11-01T00:42:48.362702370Z" level=info msg="Connect containerd service" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.362739450Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.363297887Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.363543057Z" level=info msg="Start subscribing containerd event" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.363583152Z" level=info msg="Start recovering state" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.363649737Z" level=info msg="Start event monitor" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.363669514Z" level=info msg="Start snapshots syncer" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.363678882Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.363685404Z" level=info msg="Start streaming server" Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.364357093Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:42:48.364544 env[1212]: time="2025-11-01T00:42:48.364394313Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:42:48.364642 systemd[1]: Started containerd.service. Nov 1 00:42:48.370668 locksmithd[1234]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:42:48.373704 env[1212]: time="2025-11-01T00:42:48.373665720Z" level=info msg="containerd successfully booted in 0.102613s" Nov 1 00:42:48.739353 systemd-networkd[1035]: eth0: Gained IPv6LL Nov 1 00:42:48.741858 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:42:48.744092 systemd[1]: Reached target network-online.target. Nov 1 00:42:48.747919 systemd[1]: Starting kubelet.service... Nov 1 00:42:48.967999 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:42:49.018355 systemd[1]: Finished sshd-keygen.service. Nov 1 00:42:49.057170 tar[1204]: linux-amd64/README.md Nov 1 00:42:49.277318 systemd[1]: Starting issuegen.service... Nov 1 00:42:49.279815 systemd[1]: Finished prepare-helm.service. Nov 1 00:42:49.283811 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:42:49.284086 systemd[1]: Finished issuegen.service. Nov 1 00:42:49.287564 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:42:49.294719 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:42:49.298271 systemd[1]: Started getty@tty1.service. Nov 1 00:42:49.301394 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:42:49.303159 systemd[1]: Reached target getty.target. Nov 1 00:42:50.106670 systemd[1]: Created slice system-sshd.slice. Nov 1 00:42:50.118475 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:53238.service. Nov 1 00:42:50.271836 sshd[1262]: Accepted publickey for core from 10.0.0.1 port 53238 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:42:50.282325 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:50.308041 systemd[1]: Created slice user-500.slice. Nov 1 00:42:50.314224 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:42:50.319788 systemd-logind[1194]: New session 1 of user core. Nov 1 00:42:50.342549 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:42:50.355216 systemd[1]: Starting user@500.service... Nov 1 00:42:50.360186 (systemd)[1265]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:50.538533 systemd[1265]: Queued start job for default target default.target. 
Nov 1 00:42:50.541152 systemd[1265]: Reached target paths.target. Nov 1 00:42:50.541717 systemd[1265]: Reached target sockets.target. Nov 1 00:42:50.543051 systemd[1265]: Reached target timers.target. Nov 1 00:42:50.543073 systemd[1265]: Reached target basic.target. Nov 1 00:42:50.543138 systemd[1265]: Reached target default.target. Nov 1 00:42:50.543196 systemd[1265]: Startup finished in 128ms. Nov 1 00:42:50.543411 systemd[1]: Started user@500.service. Nov 1 00:42:50.547664 systemd[1]: Started session-1.scope. Nov 1 00:42:50.661879 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:53248.service. Nov 1 00:42:50.845145 sshd[1274]: Accepted publickey for core from 10.0.0.1 port 53248 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:42:50.847756 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:50.864477 systemd-logind[1194]: New session 2 of user core. Nov 1 00:42:50.865182 systemd[1]: Started session-2.scope. Nov 1 00:42:51.021299 sshd[1274]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:51.030201 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:53248.service: Deactivated successfully. Nov 1 00:42:51.030881 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:42:51.036234 systemd-logind[1194]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:42:51.039717 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:53258.service. Nov 1 00:42:51.044464 systemd-logind[1194]: Removed session 2. Nov 1 00:42:51.178629 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 53258 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:42:51.184999 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:51.198520 systemd[1]: Started session-3.scope. Nov 1 00:42:51.203840 systemd-logind[1194]: New session 3 of user core. Nov 1 00:42:51.296370 sshd[1280]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:51.309939 systemd-logind[1194]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:42:51.310359 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:53258.service: Deactivated successfully. Nov 1 00:42:51.311658 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:42:51.313417 systemd-logind[1194]: Removed session 3. Nov 1 00:42:51.912856 systemd[1]: Started kubelet.service. Nov 1 00:42:51.914815 systemd[1]: Reached target multi-user.target. Nov 1 00:42:51.917937 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:42:51.927241 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:42:51.927410 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:42:51.929308 systemd[1]: Startup finished in 910ms (kernel) + 4.884s (initrd) + 10.209s (userspace) = 16.005s. Nov 1 00:42:52.565350 kubelet[1287]: E1101 00:42:52.565273 1287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:52.567521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:52.567711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:52.568002 systemd[1]: kubelet.service: Consumed 3.203s CPU time. 
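
The kubelet exits with status 1 here because /var/lib/kubelet/config.yaml does not exist yet; that file is typically written by kubeadm during init/join, so the unit keeps failing and being restarted until it appears. A stand-alone sketch of the same precondition check, standard library only; the path comes from the error above, the rest is illustrative.

    // Sketch: reproduce the precondition the kubelet error above reports,
    // i.e. whether the config file exists before the service starts.
    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml" // path from the kubelet error

        _, err := os.Stat(path)
        switch {
        case err == nil:
            fmt.Println("kubelet config present:", path)
        case errors.Is(err, fs.ErrNotExist):
            // Same condition as "open ...: no such file or directory" above;
            // normally resolved once kubeadm init/join writes the file.
            fmt.Println("kubelet config missing:", path)
        default:
            fmt.Println("stat failed:", err)
        }
    }
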
Nov 1 00:43:01.298276 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:59074.service. Nov 1 00:43:01.339282 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 59074 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:01.340957 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:01.344571 systemd-logind[1194]: New session 4 of user core. Nov 1 00:43:01.345406 systemd[1]: Started session-4.scope. Nov 1 00:43:01.400703 sshd[1296]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:01.403192 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:59074.service: Deactivated successfully. Nov 1 00:43:01.403708 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:43:01.404189 systemd-logind[1194]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:43:01.405344 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:59076.service. Nov 1 00:43:01.406088 systemd-logind[1194]: Removed session 4. Nov 1 00:43:01.445575 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 59076 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:01.446754 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:01.450403 systemd-logind[1194]: New session 5 of user core. Nov 1 00:43:01.451242 systemd[1]: Started session-5.scope. Nov 1 00:43:01.500715 sshd[1302]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:01.503802 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:59076.service: Deactivated successfully. Nov 1 00:43:01.505269 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:43:01.505884 systemd-logind[1194]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:43:01.507214 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:59090.service. Nov 1 00:43:01.508021 systemd-logind[1194]: Removed session 5. Nov 1 00:43:01.549395 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 59090 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:01.550793 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:01.555013 systemd-logind[1194]: New session 6 of user core. Nov 1 00:43:01.555753 systemd[1]: Started session-6.scope. Nov 1 00:43:01.610564 sshd[1308]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:01.613783 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:59090.service: Deactivated successfully. Nov 1 00:43:01.614387 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:43:01.614972 systemd-logind[1194]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:43:01.616264 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:59104.service. Nov 1 00:43:01.617107 systemd-logind[1194]: Removed session 6. Nov 1 00:43:01.658921 sshd[1314]: Accepted publickey for core from 10.0.0.1 port 59104 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:01.660777 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:01.664829 systemd-logind[1194]: New session 7 of user core. Nov 1 00:43:01.665918 systemd[1]: Started session-7.scope. Nov 1 00:43:01.725724 sudo[1317]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:43:01.725973 sudo[1317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:43:01.753199 systemd[1]: Starting docker.service... 
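
Each accepted login above identifies the client key by its SHA256 fingerprint (the SHA256:NQ/... string), which is a base64-encoded SHA-256 of the key's wire-format blob. A sketch that derives the same form of fingerprint from the authorized_keys file mentioned earlier in the log; it relies on golang.org/x/crypto/ssh, an external module that is not part of this system.

    // Sketch: compute the SHA256:... fingerprint format that sshd logs above,
    // for the first key in the authorized_keys file updated earlier in the log.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        data, err := os.ReadFile("/home/core/.ssh/authorized_keys") // path from the log
        if err != nil {
            log.Fatal(err)
        }
        pub, comment, _, _, err := ssh.ParseAuthorizedKey(data) // parses the first key
        if err != nil {
            log.Fatal(err)
        }
        // FingerprintSHA256 returns e.g. "SHA256:NQ/pL2..." - the same form sshd logs.
        fmt.Printf("%s %s %s\n", pub.Type(), ssh.FingerprintSHA256(pub), comment)
    }
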
Nov 1 00:43:01.878700 env[1329]: time="2025-11-01T00:43:01.878531675Z" level=info msg="Starting up" Nov 1 00:43:01.880231 env[1329]: time="2025-11-01T00:43:01.880168404Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:43:01.880231 env[1329]: time="2025-11-01T00:43:01.880205644Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:43:01.880231 env[1329]: time="2025-11-01T00:43:01.880233055Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:43:01.880231 env[1329]: time="2025-11-01T00:43:01.880247412Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:43:01.882450 env[1329]: time="2025-11-01T00:43:01.882419466Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:43:01.882450 env[1329]: time="2025-11-01T00:43:01.882438391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:43:01.882450 env[1329]: time="2025-11-01T00:43:01.882449522Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:43:01.882450 env[1329]: time="2025-11-01T00:43:01.882456565Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:43:01.962972 env[1329]: time="2025-11-01T00:43:01.962921225Z" level=info msg="Loading containers: start." Nov 1 00:43:02.114075 kernel: Initializing XFRM netlink socket Nov 1 00:43:02.153251 env[1329]: time="2025-11-01T00:43:02.153144862Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:43:02.214363 systemd-networkd[1035]: docker0: Link UP Nov 1 00:43:02.230838 env[1329]: time="2025-11-01T00:43:02.230783703Z" level=info msg="Loading containers: done." Nov 1 00:43:02.395234 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3143937103-merged.mount: Deactivated successfully. Nov 1 00:43:02.396664 env[1329]: time="2025-11-01T00:43:02.396593546Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:43:02.396852 env[1329]: time="2025-11-01T00:43:02.396821754Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:43:02.396929 env[1329]: time="2025-11-01T00:43:02.396915369Z" level=info msg="Daemon has completed initialization" Nov 1 00:43:02.419545 systemd[1]: Started docker.service. Nov 1 00:43:02.425302 env[1329]: time="2025-11-01T00:43:02.425235820Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:43:02.792921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:43:02.793191 systemd[1]: Stopped kubelet.service. Nov 1 00:43:02.793245 systemd[1]: kubelet.service: Consumed 3.203s CPU time. Nov 1 00:43:02.794965 systemd[1]: Starting kubelet.service... Nov 1 00:43:02.963741 systemd[1]: Started kubelet.service. 
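
Once the daemon reports "API listen on /run/docker.sock", the Docker Engine API is reachable over that Unix socket. A standard-library-only sketch that pings it over HTTP; the socket path is taken from the log, and /_ping is the Engine API's health endpoint.

    // Sketch: talk to the Docker Engine API over the Unix socket reported
    // above ("API listen on /run/docker.sock"), standard library only.
    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Ignore the URL's host and always dial the daemon's Unix socket.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }

        resp, err := client.Get("http://docker/_ping") // host part is ignored by the dialer
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("ping: %s (%s)\n", body, resp.Status)
    }
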
Nov 1 00:43:03.179529 kubelet[1461]: E1101 00:43:03.179374 1461 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:43:03.182289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:43:03.182423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:43:03.361537 env[1212]: time="2025-11-01T00:43:03.361471516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:43:04.237065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206813311.mount: Deactivated successfully. Nov 1 00:43:05.955134 env[1212]: time="2025-11-01T00:43:05.955052056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:05.957150 env[1212]: time="2025-11-01T00:43:05.957082133Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:05.958856 env[1212]: time="2025-11-01T00:43:05.958816034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:05.960594 env[1212]: time="2025-11-01T00:43:05.960549535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:05.961328 env[1212]: time="2025-11-01T00:43:05.961288832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:43:05.962042 env[1212]: time="2025-11-01T00:43:05.961993263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:43:07.988327 env[1212]: time="2025-11-01T00:43:07.988259445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:07.991139 env[1212]: time="2025-11-01T00:43:07.991099882Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:07.994115 env[1212]: time="2025-11-01T00:43:07.994016001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:07.996416 env[1212]: time="2025-11-01T00:43:07.996380565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:07.997404 env[1212]: time="2025-11-01T00:43:07.997339584Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:43:07.998187 env[1212]: time="2025-11-01T00:43:07.998139814Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:43:10.234155 env[1212]: time="2025-11-01T00:43:10.234073428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.238927 env[1212]: time="2025-11-01T00:43:10.238879150Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.242193 env[1212]: time="2025-11-01T00:43:10.242152319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.246323 env[1212]: time="2025-11-01T00:43:10.246271404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.247305 env[1212]: time="2025-11-01T00:43:10.247263444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 00:43:10.247896 env[1212]: time="2025-11-01T00:43:10.247862718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:43:11.791188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934588346.mount: Deactivated successfully. Nov 1 00:43:12.363207 env[1212]: time="2025-11-01T00:43:12.363130280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:12.367733 env[1212]: time="2025-11-01T00:43:12.367676566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:12.369672 env[1212]: time="2025-11-01T00:43:12.369619960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:12.370949 env[1212]: time="2025-11-01T00:43:12.370910911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:12.371299 env[1212]: time="2025-11-01T00:43:12.371269203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 00:43:12.371873 env[1212]: time="2025-11-01T00:43:12.371846486Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:43:12.964292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871200512.mount: Deactivated successfully. Nov 1 00:43:13.286455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 1 00:43:13.286645 systemd[1]: Stopped kubelet.service. Nov 1 00:43:13.288095 systemd[1]: Starting kubelet.service... Nov 1 00:43:13.383704 systemd[1]: Started kubelet.service. Nov 1 00:43:13.420613 kubelet[1476]: E1101 00:43:13.420525 1476 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:43:13.423106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:43:13.423275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:43:15.158048 env[1212]: time="2025-11-01T00:43:15.157942610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.160569 env[1212]: time="2025-11-01T00:43:15.160504194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.162258 env[1212]: time="2025-11-01T00:43:15.162206927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.164179 env[1212]: time="2025-11-01T00:43:15.164141245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.164905 env[1212]: time="2025-11-01T00:43:15.164877295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 00:43:15.165472 env[1212]: time="2025-11-01T00:43:15.165416216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:43:15.626151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008637888.mount: Deactivated successfully. 
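
The sha256:... identifiers throughout these pulls, whether bare image IDs or ...@sha256:... registry references, are content digests: the hex SHA-256 of the referenced bytes (a manifest, an image config, a layer), prefixed with the algorithm name. A toy sketch of the computation; the blob here is a placeholder, not real image data.

    // Sketch: how the "sha256:..." identifiers above are formed. An OCI/Docker
    // content digest is the hex SHA-256 of the referenced bytes, prefixed with
    // the algorithm name.
    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // digestOf returns the digest string for a blob, e.g. "sha256:52546a36...".
    func digestOf(blob []byte) string {
        sum := sha256.Sum256(blob)
        return fmt.Sprintf("sha256:%x", sum)
    }

    func main() {
        blob := []byte(`{"example":"image config or manifest bytes"}`) // placeholder content
        fmt.Println(digestOf(blob))
    }
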
Nov 1 00:43:15.633020 env[1212]: time="2025-11-01T00:43:15.632973493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.638299 env[1212]: time="2025-11-01T00:43:15.638240881Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.640065 env[1212]: time="2025-11-01T00:43:15.640006713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.641682 env[1212]: time="2025-11-01T00:43:15.641637822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:15.642252 env[1212]: time="2025-11-01T00:43:15.642214413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 00:43:15.642903 env[1212]: time="2025-11-01T00:43:15.642876224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:43:20.065737 env[1212]: time="2025-11-01T00:43:20.065656265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:20.068349 env[1212]: time="2025-11-01T00:43:20.068303299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:20.070844 env[1212]: time="2025-11-01T00:43:20.070770897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:20.073092 env[1212]: time="2025-11-01T00:43:20.073064248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:20.073908 env[1212]: time="2025-11-01T00:43:20.073874537Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 00:43:22.738924 systemd[1]: Stopped kubelet.service. Nov 1 00:43:22.741473 systemd[1]: Starting kubelet.service... Nov 1 00:43:22.765190 systemd[1]: Reloading. Nov 1 00:43:22.849802 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-11-01T00:43:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:22.850183 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-11-01T00:43:22Z" level=info msg="torcx already run" Nov 1 00:43:23.107981 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Nov 1 00:43:23.108000 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:43:23.126048 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:43:23.208568 systemd[1]: Started kubelet.service. Nov 1 00:43:23.210410 systemd[1]: Stopping kubelet.service... Nov 1 00:43:23.215959 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:43:23.216278 systemd[1]: Stopped kubelet.service. Nov 1 00:43:23.218857 systemd[1]: Starting kubelet.service... Nov 1 00:43:23.318968 systemd[1]: Started kubelet.service. Nov 1 00:43:23.395046 kubelet[1578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:43:23.395046 kubelet[1578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:43:23.395416 kubelet[1578]: I1101 00:43:23.395012 1578 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:43:24.162994 kubelet[1578]: I1101 00:43:24.162902 1578 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:43:24.162994 kubelet[1578]: I1101 00:43:24.162955 1578 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:43:24.163673 kubelet[1578]: I1101 00:43:24.163641 1578 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:43:24.163673 kubelet[1578]: I1101 00:43:24.163656 1578 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:43:24.164481 kubelet[1578]: I1101 00:43:24.164448 1578 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:43:24.919148 kubelet[1578]: I1101 00:43:24.919065 1578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:43:24.923534 kubelet[1578]: E1101 00:43:24.923477 1578 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:43:24.930872 kubelet[1578]: E1101 00:43:24.930811 1578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:43:24.931060 kubelet[1578]: I1101 00:43:24.930894 1578 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:43:24.935855 kubelet[1578]: I1101 00:43:24.935808 1578 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:43:24.936755 kubelet[1578]: I1101 00:43:24.936697 1578 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:43:24.936969 kubelet[1578]: I1101 00:43:24.936752 1578 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:43:24.937099 kubelet[1578]: I1101 00:43:24.936979 1578 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:43:24.937099 kubelet[1578]: I1101 00:43:24.936992 1578 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:43:24.937154 kubelet[1578]: I1101 00:43:24.937133 1578 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:43:24.940928 kubelet[1578]: I1101 00:43:24.940888 1578 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:24.942485 kubelet[1578]: I1101 00:43:24.942455 1578 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:43:24.942485 kubelet[1578]: I1101 00:43:24.942486 1578 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:43:24.942561 kubelet[1578]: I1101 00:43:24.942518 1578 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:43:24.942561 kubelet[1578]: I1101 00:43:24.942539 1578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:43:24.943631 kubelet[1578]: E1101 00:43:24.943584 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:43:24.943756 kubelet[1578]: E1101 00:43:24.943704 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:43:24.944754 kubelet[1578]: I1101 00:43:24.944727 1578 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:43:24.945480 kubelet[1578]: I1101 00:43:24.945435 1578 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:43:24.945657 kubelet[1578]: I1101 00:43:24.945485 1578 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:43:24.945687 kubelet[1578]: W1101 00:43:24.945668 1578 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:43:24.948288 kubelet[1578]: I1101 00:43:24.948259 1578 server.go:1262] "Started kubelet" Nov 1 00:43:24.948398 kubelet[1578]: I1101 00:43:24.948359 1578 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:43:24.951858 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 00:43:24.951989 kubelet[1578]: I1101 00:43:24.951955 1578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:43:24.954911 kubelet[1578]: I1101 00:43:24.954874 1578 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:43:24.956772 kubelet[1578]: E1101 00:43:24.956736 1578 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:43:24.957014 kubelet[1578]: I1101 00:43:24.956969 1578 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:43:24.957204 kubelet[1578]: I1101 00:43:24.957064 1578 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:43:24.957445 kubelet[1578]: I1101 00:43:24.957411 1578 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:43:24.957527 kubelet[1578]: I1101 00:43:24.957441 1578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:43:24.962107 kubelet[1578]: I1101 00:43:24.962076 1578 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:43:24.962386 kubelet[1578]: E1101 00:43:24.962356 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:43:24.962721 kubelet[1578]: I1101 00:43:24.962697 1578 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:43:24.962800 kubelet[1578]: I1101 00:43:24.962784 1578 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:43:24.964649 kubelet[1578]: E1101 00:43:24.964601 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:43:24.964820 kubelet[1578]: E1101 00:43:24.964683 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" Nov 1 00:43:24.965712 kubelet[1578]: I1101 00:43:24.965686 1578 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:43:24.966179 kubelet[1578]: I1101 00:43:24.966150 1578 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:43:24.966913 kubelet[1578]: E1101 00:43:24.965797 1578 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bb49d12a4b9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:43:24.948212635 +0000 UTC m=+1.623104067,LastTimestamp:2025-11-01 00:43:24.948212635 +0000 UTC m=+1.623104067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:43:24.967583 kubelet[1578]: I1101 00:43:24.967565 1578 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:43:24.975566 kubelet[1578]: I1101 00:43:24.975500 1578 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:43:24.978126 kubelet[1578]: I1101 00:43:24.978089 1578 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:43:24.978126 kubelet[1578]: I1101 00:43:24.978111 1578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:43:24.978126 kubelet[1578]: I1101 00:43:24.978132 1578 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:24.978441 kubelet[1578]: I1101 00:43:24.978416 1578 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:43:24.978486 kubelet[1578]: I1101 00:43:24.978446 1578 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:43:24.978578 kubelet[1578]: I1101 00:43:24.978480 1578 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:43:24.978578 kubelet[1578]: E1101 00:43:24.978558 1578 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:43:24.979429 kubelet[1578]: E1101 00:43:24.979388 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:43:24.981779 kubelet[1578]: I1101 00:43:24.981750 1578 policy_none.go:49] "None policy: Start" Nov 1 00:43:24.981876 kubelet[1578]: I1101 00:43:24.981787 1578 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:43:24.981876 kubelet[1578]: I1101 00:43:24.981805 1578 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:43:24.984404 kubelet[1578]: I1101 00:43:24.984376 1578 policy_none.go:47] "Start" Nov 1 00:43:24.988714 systemd[1]: Created slice kubepods.slice. Nov 1 00:43:24.993800 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:43:24.996727 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:43:25.010466 kubelet[1578]: E1101 00:43:25.010404 1578 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:43:25.010698 kubelet[1578]: I1101 00:43:25.010594 1578 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:43:25.010698 kubelet[1578]: I1101 00:43:25.010609 1578 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:43:25.011002 kubelet[1578]: I1101 00:43:25.010977 1578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:43:25.012255 kubelet[1578]: E1101 00:43:25.012173 1578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:43:25.012318 kubelet[1578]: E1101 00:43:25.012259 1578 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:43:25.090264 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 1 00:43:25.101793 kubelet[1578]: E1101 00:43:25.101743 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:43:25.103864 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Nov 1 00:43:25.105512 kubelet[1578]: E1101 00:43:25.105450 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:43:25.112528 kubelet[1578]: I1101 00:43:25.112504 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:43:25.113108 kubelet[1578]: E1101 00:43:25.113069 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Nov 1 00:43:25.139293 systemd[1]: Created slice kubepods-burstable-pod342d9d5b604ed27ba9f8d3ee49d74d12.slice. Nov 1 00:43:25.144017 kubelet[1578]: E1101 00:43:25.143988 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:43:25.164067 kubelet[1578]: I1101 00:43:25.163988 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/342d9d5b604ed27ba9f8d3ee49d74d12-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"342d9d5b604ed27ba9f8d3ee49d74d12\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:25.164218 kubelet[1578]: I1101 00:43:25.164077 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:25.164270 kubelet[1578]: I1101 00:43:25.164179 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:25.164297 kubelet[1578]: I1101 00:43:25.164252 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:25.164324 kubelet[1578]: I1101 00:43:25.164296 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:25.164324 kubelet[1578]: I1101 00:43:25.164312 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/342d9d5b604ed27ba9f8d3ee49d74d12-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"342d9d5b604ed27ba9f8d3ee49d74d12\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:25.164372 kubelet[1578]: I1101 00:43:25.164328 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/342d9d5b604ed27ba9f8d3ee49d74d12-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"342d9d5b604ed27ba9f8d3ee49d74d12\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:25.164400 kubelet[1578]: I1101 00:43:25.164353 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:25.164426 kubelet[1578]: I1101 00:43:25.164399 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:25.165904 kubelet[1578]: E1101 00:43:25.165864 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" Nov 1 00:43:25.314953 kubelet[1578]: I1101 00:43:25.314915 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:43:25.315447 kubelet[1578]: E1101 00:43:25.315399 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Nov 1 00:43:25.406607 kubelet[1578]: E1101 00:43:25.406563 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:25.407534 env[1212]: time="2025-11-01T00:43:25.407476687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:25.409862 kubelet[1578]: E1101 00:43:25.409834 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:25.410462 env[1212]: time="2025-11-01T00:43:25.410420819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:25.448272 kubelet[1578]: E1101 00:43:25.448236 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:25.448897 env[1212]: time="2025-11-01T00:43:25.448853486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:342d9d5b604ed27ba9f8d3ee49d74d12,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:25.566834 kubelet[1578]: E1101 00:43:25.566683 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" Nov 1 00:43:25.717142 kubelet[1578]: I1101 00:43:25.717106 1578 kubelet_node_status.go:75] 
"Attempting to register node" node="localhost" Nov 1 00:43:25.717470 kubelet[1578]: E1101 00:43:25.717435 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Nov 1 00:43:25.839182 kubelet[1578]: E1101 00:43:25.839016 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:43:25.883853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780004713.mount: Deactivated successfully. Nov 1 00:43:25.890189 env[1212]: time="2025-11-01T00:43:25.890111919Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.894493 env[1212]: time="2025-11-01T00:43:25.894427320Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.895808 env[1212]: time="2025-11-01T00:43:25.895755595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.896401 env[1212]: time="2025-11-01T00:43:25.896362950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.897379 env[1212]: time="2025-11-01T00:43:25.897324873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.900138 env[1212]: time="2025-11-01T00:43:25.900086105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.901442 env[1212]: time="2025-11-01T00:43:25.901396476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.903331 env[1212]: time="2025-11-01T00:43:25.903296457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.904556 env[1212]: time="2025-11-01T00:43:25.904503170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.906337 env[1212]: time="2025-11-01T00:43:25.906291127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.907822 env[1212]: time="2025-11-01T00:43:25.907786062Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.909584 env[1212]: time="2025-11-01T00:43:25.909557587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:25.937570 env[1212]: time="2025-11-01T00:43:25.937448972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:25.937570 env[1212]: time="2025-11-01T00:43:25.937493116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:25.937570 env[1212]: time="2025-11-01T00:43:25.937505079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:25.937826 env[1212]: time="2025-11-01T00:43:25.937698499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/34040f41e04ecfb2eb80b8b709dce9d12c89de82b7e6d1bb2e1b5792f5aa2a81 pid=1623 runtime=io.containerd.runc.v2 Nov 1 00:43:25.992118 env[1212]: time="2025-11-01T00:43:25.974847386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:25.992118 env[1212]: time="2025-11-01T00:43:25.974889427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:25.992118 env[1212]: time="2025-11-01T00:43:25.974899466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:25.992118 env[1212]: time="2025-11-01T00:43:25.975231362Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9051f5fd0e5e02053a1e7fa15c86af0e24471d061109b272d81a7008a640b912 pid=1641 runtime=io.containerd.runc.v2 Nov 1 00:43:26.007878 systemd[1]: Started cri-containerd-9051f5fd0e5e02053a1e7fa15c86af0e24471d061109b272d81a7008a640b912.scope. Nov 1 00:43:26.014999 env[1212]: time="2025-11-01T00:43:26.014916694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:26.015253 env[1212]: time="2025-11-01T00:43:26.015209725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:26.015453 env[1212]: time="2025-11-01T00:43:26.015399879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:26.015699 env[1212]: time="2025-11-01T00:43:26.015669635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71e52c80acde1321cab98d08db34485947ab926b271da8c9d63a3fb401f1568d pid=1665 runtime=io.containerd.runc.v2 Nov 1 00:43:26.035706 kubelet[1578]: E1101 00:43:26.035535 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:43:26.062501 systemd[1]: Started cri-containerd-34040f41e04ecfb2eb80b8b709dce9d12c89de82b7e6d1bb2e1b5792f5aa2a81.scope. Nov 1 00:43:26.141338 systemd[1]: Started cri-containerd-71e52c80acde1321cab98d08db34485947ab926b271da8c9d63a3fb401f1568d.scope. Nov 1 00:43:26.154913 kubelet[1578]: E1101 00:43:26.154865 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:43:26.166860 env[1212]: time="2025-11-01T00:43:26.166127274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:342d9d5b604ed27ba9f8d3ee49d74d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"9051f5fd0e5e02053a1e7fa15c86af0e24471d061109b272d81a7008a640b912\"" Nov 1 00:43:26.167173 kubelet[1578]: E1101 00:43:26.167130 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:26.188447 env[1212]: time="2025-11-01T00:43:26.188384643Z" level=info msg="CreateContainer within sandbox \"9051f5fd0e5e02053a1e7fa15c86af0e24471d061109b272d81a7008a640b912\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:43:26.207852 env[1212]: time="2025-11-01T00:43:26.207772861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"34040f41e04ecfb2eb80b8b709dce9d12c89de82b7e6d1bb2e1b5792f5aa2a81\"" Nov 1 00:43:26.208917 kubelet[1578]: E1101 00:43:26.208830 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:26.214423 env[1212]: time="2025-11-01T00:43:26.211736738Z" level=info msg="CreateContainer within sandbox \"9051f5fd0e5e02053a1e7fa15c86af0e24471d061109b272d81a7008a640b912\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f27deee2223844f6e83e327f1d384fb1b33d9d96615cd2fb53f7b640b981c7a6\"" Nov 1 00:43:26.214762 env[1212]: time="2025-11-01T00:43:26.214720849Z" level=info msg="StartContainer for \"f27deee2223844f6e83e327f1d384fb1b33d9d96615cd2fb53f7b640b981c7a6\"" Nov 1 00:43:26.215730 env[1212]: time="2025-11-01T00:43:26.215703520Z" level=info msg="CreateContainer within sandbox \"34040f41e04ecfb2eb80b8b709dce9d12c89de82b7e6d1bb2e1b5792f5aa2a81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:43:26.235497 env[1212]: 
time="2025-11-01T00:43:26.235440375Z" level=info msg="CreateContainer within sandbox \"34040f41e04ecfb2eb80b8b709dce9d12c89de82b7e6d1bb2e1b5792f5aa2a81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5548770548713f31507c866d9df71f34f6db2bf394c2476a916e82bb41cc8218\"" Nov 1 00:43:26.235788 systemd[1]: Started cri-containerd-f27deee2223844f6e83e327f1d384fb1b33d9d96615cd2fb53f7b640b981c7a6.scope. Nov 1 00:43:26.241296 env[1212]: time="2025-11-01T00:43:26.237877751Z" level=info msg="StartContainer for \"5548770548713f31507c866d9df71f34f6db2bf394c2476a916e82bb41cc8218\"" Nov 1 00:43:26.249134 env[1212]: time="2025-11-01T00:43:26.249087475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"71e52c80acde1321cab98d08db34485947ab926b271da8c9d63a3fb401f1568d\"" Nov 1 00:43:26.250658 kubelet[1578]: E1101 00:43:26.250427 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:26.257594 systemd[1]: Started cri-containerd-5548770548713f31507c866d9df71f34f6db2bf394c2476a916e82bb41cc8218.scope. Nov 1 00:43:26.261669 env[1212]: time="2025-11-01T00:43:26.261623789Z" level=info msg="CreateContainer within sandbox \"71e52c80acde1321cab98d08db34485947ab926b271da8c9d63a3fb401f1568d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:43:26.343083 env[1212]: time="2025-11-01T00:43:26.343012714Z" level=info msg="CreateContainer within sandbox \"71e52c80acde1321cab98d08db34485947ab926b271da8c9d63a3fb401f1568d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87b089365ddb800187ed1e24cea6f938603d1cd2b9bb3e871ac3a03027ca39c4\"" Nov 1 00:43:26.343685 env[1212]: time="2025-11-01T00:43:26.343650163Z" level=info msg="StartContainer for \"87b089365ddb800187ed1e24cea6f938603d1cd2b9bb3e871ac3a03027ca39c4\"" Nov 1 00:43:26.344731 env[1212]: time="2025-11-01T00:43:26.344702147Z" level=info msg="StartContainer for \"f27deee2223844f6e83e327f1d384fb1b33d9d96615cd2fb53f7b640b981c7a6\" returns successfully" Nov 1 00:43:26.354774 env[1212]: time="2025-11-01T00:43:26.354728106Z" level=info msg="StartContainer for \"5548770548713f31507c866d9df71f34f6db2bf394c2476a916e82bb41cc8218\" returns successfully" Nov 1 00:43:26.365505 systemd[1]: Started cri-containerd-87b089365ddb800187ed1e24cea6f938603d1cd2b9bb3e871ac3a03027ca39c4.scope. 
Nov 1 00:43:26.367576 kubelet[1578]: E1101 00:43:26.367498 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s" Nov 1 00:43:26.415438 env[1212]: time="2025-11-01T00:43:26.415324293Z" level=info msg="StartContainer for \"87b089365ddb800187ed1e24cea6f938603d1cd2b9bb3e871ac3a03027ca39c4\" returns successfully" Nov 1 00:43:26.519193 kubelet[1578]: I1101 00:43:26.519158 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:43:26.992645 kubelet[1578]: E1101 00:43:26.992523 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:43:26.992941 kubelet[1578]: E1101 00:43:26.992862 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:26.994666 kubelet[1578]: E1101 00:43:26.994652 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:43:26.994867 kubelet[1578]: E1101 00:43:26.994853 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:26.996562 kubelet[1578]: E1101 00:43:26.996549 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:43:26.996752 kubelet[1578]: E1101 00:43:26.996739 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:27.905385 kubelet[1578]: I1101 00:43:27.905337 1578 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:43:27.905840 kubelet[1578]: E1101 00:43:27.905397 1578 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 00:43:27.944136 kubelet[1578]: I1101 00:43:27.944086 1578 apiserver.go:52] "Watching apiserver" Nov 1 00:43:27.963258 kubelet[1578]: I1101 00:43:27.963202 1578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:27.963435 kubelet[1578]: I1101 00:43:27.963298 1578 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:43:27.967271 kubelet[1578]: E1101 00:43:27.967247 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:27.967271 kubelet[1578]: I1101 00:43:27.967271 1578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:27.968645 kubelet[1578]: E1101 00:43:27.968618 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:27.968645 kubelet[1578]: I1101 00:43:27.968638 1578 kubelet.go:3219] 
"Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:27.970581 kubelet[1578]: E1101 00:43:27.970560 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:27.997715 kubelet[1578]: I1101 00:43:27.997691 1578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:27.997863 kubelet[1578]: I1101 00:43:27.997781 1578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:27.999528 kubelet[1578]: E1101 00:43:27.999500 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:27.999595 kubelet[1578]: E1101 00:43:27.999507 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:27.999676 kubelet[1578]: E1101 00:43:27.999655 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:27.999744 kubelet[1578]: E1101 00:43:27.999701 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:29.771887 systemd[1]: Reloading. Nov 1 00:43:29.856419 /usr/lib/systemd/system-generators/torcx-generator[1890]: time="2025-11-01T00:43:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:29.856445 /usr/lib/systemd/system-generators/torcx-generator[1890]: time="2025-11-01T00:43:29Z" level=info msg="torcx already run" Nov 1 00:43:29.921775 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:43:29.921792 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:43:29.942181 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:43:30.043418 kubelet[1578]: I1101 00:43:30.043386 1578 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:43:30.043582 systemd[1]: Stopping kubelet.service... Nov 1 00:43:30.068434 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:43:30.068614 systemd[1]: Stopped kubelet.service. Nov 1 00:43:30.068665 systemd[1]: kubelet.service: Consumed 1.345s CPU time. Nov 1 00:43:30.070256 systemd[1]: Starting kubelet.service... Nov 1 00:43:30.163834 systemd[1]: Started kubelet.service. Nov 1 00:43:30.216533 kubelet[1934]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 1 00:43:30.216533 kubelet[1934]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:43:30.216963 kubelet[1934]: I1101 00:43:30.216557 1934 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:43:30.222251 kubelet[1934]: I1101 00:43:30.222163 1934 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:43:30.222251 kubelet[1934]: I1101 00:43:30.222191 1934 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:43:30.222251 kubelet[1934]: I1101 00:43:30.222225 1934 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:43:30.222251 kubelet[1934]: I1101 00:43:30.222251 1934 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:43:30.222507 kubelet[1934]: I1101 00:43:30.222482 1934 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:43:30.223699 kubelet[1934]: I1101 00:43:30.223675 1934 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:43:30.225472 kubelet[1934]: I1101 00:43:30.225443 1934 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:43:30.230878 kubelet[1934]: E1101 00:43:30.230847 1934 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:43:30.230986 kubelet[1934]: I1101 00:43:30.230898 1934 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:43:30.234403 kubelet[1934]: I1101 00:43:30.234354 1934 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:43:30.234708 kubelet[1934]: I1101 00:43:30.234658 1934 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:43:30.234892 kubelet[1934]: I1101 00:43:30.234699 1934 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:43:30.234986 kubelet[1934]: I1101 00:43:30.234892 1934 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:43:30.234986 kubelet[1934]: I1101 00:43:30.234904 1934 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:43:30.234986 kubelet[1934]: I1101 00:43:30.234938 1934 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:43:30.236226 kubelet[1934]: I1101 00:43:30.236207 1934 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:30.236425 kubelet[1934]: I1101 00:43:30.236409 1934 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:43:30.236492 kubelet[1934]: I1101 00:43:30.236434 1934 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:43:30.236492 kubelet[1934]: I1101 00:43:30.236462 1934 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:43:30.236492 kubelet[1934]: I1101 00:43:30.236480 1934 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:43:30.237517 kubelet[1934]: I1101 00:43:30.237477 1934 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:43:30.237983 kubelet[1934]: I1101 00:43:30.237955 1934 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:43:30.237983 kubelet[1934]: I1101 00:43:30.237984 1934 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:43:30.239960 kubelet[1934]: I1101 
00:43:30.239934 1934 server.go:1262] "Started kubelet" Nov 1 00:43:30.241632 kubelet[1934]: I1101 00:43:30.241602 1934 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:43:30.246404 kubelet[1934]: I1101 00:43:30.245389 1934 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.247326 1934 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.247406 1934 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.247698 1934 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:43:30.250389 kubelet[1934]: E1101 00:43:30.248097 1934 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.248306 1934 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.248388 1934 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.248495 1934 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.249268 1934 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:43:30.250389 kubelet[1934]: I1101 00:43:30.249376 1934 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:43:30.250697 kubelet[1934]: I1101 00:43:30.250641 1934 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:43:30.251555 kubelet[1934]: I1101 00:43:30.251520 1934 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:43:30.254210 kubelet[1934]: I1101 00:43:30.254159 1934 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:43:30.255651 kubelet[1934]: E1101 00:43:30.254544 1934 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:43:30.274948 kubelet[1934]: I1101 00:43:30.274896 1934 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:43:30.276769 kubelet[1934]: I1101 00:43:30.276745 1934 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:43:30.276769 kubelet[1934]: I1101 00:43:30.276766 1934 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:43:30.276860 kubelet[1934]: I1101 00:43:30.276790 1934 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:43:30.276860 kubelet[1934]: E1101 00:43:30.276829 1934 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:43:30.292199 kubelet[1934]: I1101 00:43:30.292173 1934 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:43:30.292345 kubelet[1934]: I1101 00:43:30.292327 1934 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:43:30.292440 kubelet[1934]: I1101 00:43:30.292427 1934 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:30.293502 kubelet[1934]: I1101 00:43:30.293436 1934 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:43:30.293601 kubelet[1934]: I1101 00:43:30.293572 1934 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:43:30.293701 kubelet[1934]: I1101 00:43:30.293681 1934 policy_none.go:49] "None policy: Start" Nov 1 00:43:30.293805 kubelet[1934]: I1101 00:43:30.293786 1934 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:43:30.293917 kubelet[1934]: I1101 00:43:30.293899 1934 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:43:30.294114 kubelet[1934]: I1101 00:43:30.294099 1934 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:43:30.294201 kubelet[1934]: I1101 00:43:30.294187 1934 policy_none.go:47] "Start" Nov 1 00:43:30.298218 kubelet[1934]: E1101 00:43:30.298187 1934 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:43:30.298409 kubelet[1934]: I1101 00:43:30.298391 1934 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:43:30.298455 kubelet[1934]: I1101 00:43:30.298407 1934 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:43:30.298817 kubelet[1934]: I1101 00:43:30.298783 1934 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:43:30.299508 kubelet[1934]: E1101 00:43:30.299485 1934 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:43:30.378669 kubelet[1934]: I1101 00:43:30.378604 1934 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:30.378669 kubelet[1934]: I1101 00:43:30.378642 1934 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:30.378941 kubelet[1934]: I1101 00:43:30.378696 1934 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:30.402065 kubelet[1934]: I1101 00:43:30.402020 1934 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:43:30.408841 kubelet[1934]: I1101 00:43:30.408796 1934 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:43:30.409081 kubelet[1934]: I1101 00:43:30.408881 1934 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:43:30.449445 kubelet[1934]: I1101 00:43:30.449381 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/342d9d5b604ed27ba9f8d3ee49d74d12-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"342d9d5b604ed27ba9f8d3ee49d74d12\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:30.449445 kubelet[1934]: I1101 00:43:30.449422 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/342d9d5b604ed27ba9f8d3ee49d74d12-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"342d9d5b604ed27ba9f8d3ee49d74d12\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:30.449445 kubelet[1934]: I1101 00:43:30.449438 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:30.449445 kubelet[1934]: I1101 00:43:30.449453 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:30.449713 kubelet[1934]: I1101 00:43:30.449465 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/342d9d5b604ed27ba9f8d3ee49d74d12-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"342d9d5b604ed27ba9f8d3ee49d74d12\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:30.449713 kubelet[1934]: I1101 00:43:30.449479 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:30.449713 kubelet[1934]: I1101 00:43:30.449514 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:30.449713 kubelet[1934]: I1101 00:43:30.449549 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:30.449713 kubelet[1934]: I1101 00:43:30.449572 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:30.685150 kubelet[1934]: E1101 00:43:30.684989 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:30.685150 kubelet[1934]: E1101 00:43:30.684993 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:30.685339 kubelet[1934]: E1101 00:43:30.685186 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:30.990047 sudo[1974]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:43:30.990279 sudo[1974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:43:31.237616 kubelet[1934]: I1101 00:43:31.237561 1934 apiserver.go:52] "Watching apiserver" Nov 1 00:43:31.248863 kubelet[1934]: I1101 00:43:31.248720 1934 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:43:31.287580 kubelet[1934]: I1101 00:43:31.287532 1934 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:31.287580 kubelet[1934]: I1101 00:43:31.287595 1934 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:31.287968 kubelet[1934]: I1101 00:43:31.287915 1934 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:31.295440 kubelet[1934]: E1101 00:43:31.295388 1934 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:43:31.295634 kubelet[1934]: E1101 00:43:31.295602 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:31.296573 kubelet[1934]: E1101 00:43:31.296087 1934 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:43:31.296573 kubelet[1934]: E1101 00:43:31.296197 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:31.296573 kubelet[1934]: E1101 00:43:31.296279 1934 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:43:31.296573 kubelet[1934]: E1101 00:43:31.296376 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:31.320137 kubelet[1934]: I1101 00:43:31.320055 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.320015236 podStartE2EDuration="1.320015236s" podCreationTimestamp="2025-11-01 00:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:31.310812991 +0000 UTC m=+1.142844294" watchObservedRunningTime="2025-11-01 00:43:31.320015236 +0000 UTC m=+1.152046539" Nov 1 00:43:31.483213 kubelet[1934]: I1101 00:43:31.483112 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.483088915 podStartE2EDuration="1.483088915s" podCreationTimestamp="2025-11-01 00:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:31.320427381 +0000 UTC m=+1.152458704" watchObservedRunningTime="2025-11-01 00:43:31.483088915 +0000 UTC m=+1.315120208" Nov 1 00:43:31.519918 sudo[1974]: pam_unix(sudo:session): session closed for user root Nov 1 00:43:31.898156 kubelet[1934]: I1101 00:43:31.898091 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.898058928 podStartE2EDuration="1.898058928s" podCreationTimestamp="2025-11-01 00:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:31.483336716 +0000 UTC m=+1.315368009" watchObservedRunningTime="2025-11-01 00:43:31.898058928 +0000 UTC m=+1.730090221" Nov 1 00:43:32.288454 kubelet[1934]: E1101 00:43:32.288412 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:32.288772 kubelet[1934]: E1101 00:43:32.288535 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:32.289040 kubelet[1934]: E1101 00:43:32.289010 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:33.289897 kubelet[1934]: E1101 00:43:33.289854 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:33.585587 sudo[1317]: pam_unix(sudo:session): session closed for user root Nov 1 00:43:33.587212 sshd[1314]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:33.590533 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:59104.service: Deactivated successfully. 
Nov 1 00:43:33.591389 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:43:33.591541 systemd[1]: session-7.scope: Consumed 5.272s CPU time. Nov 1 00:43:33.591991 systemd-logind[1194]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:43:33.593053 systemd-logind[1194]: Removed session 7. Nov 1 00:43:33.925272 update_engine[1200]: I1101 00:43:33.925042 1200 update_attempter.cc:509] Updating boot flags... Nov 1 00:43:34.458570 kubelet[1934]: E1101 00:43:34.458513 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:36.047183 kubelet[1934]: I1101 00:43:36.047146 1934 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:43:36.047734 kubelet[1934]: I1101 00:43:36.047674 1934 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:43:36.047784 env[1212]: time="2025-11-01T00:43:36.047491679Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:43:36.843499 systemd[1]: Created slice kubepods-besteffort-pod76ba467d_d470_4395_9d8b_96debbcdaa86.slice. Nov 1 00:43:36.858741 systemd[1]: Created slice kubepods-burstable-podccf5477c_955c_4f12_a373_2f4712c8e970.slice. Nov 1 00:43:36.899157 kubelet[1934]: I1101 00:43:36.899095 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cni-path\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899157 kubelet[1934]: I1101 00:43:36.899137 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76ba467d-d470-4395-9d8b-96debbcdaa86-xtables-lock\") pod \"kube-proxy-9d7jc\" (UID: \"76ba467d-d470-4395-9d8b-96debbcdaa86\") " pod="kube-system/kube-proxy-9d7jc" Nov 1 00:43:36.899157 kubelet[1934]: I1101 00:43:36.899152 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76ba467d-d470-4395-9d8b-96debbcdaa86-lib-modules\") pod \"kube-proxy-9d7jc\" (UID: \"76ba467d-d470-4395-9d8b-96debbcdaa86\") " pod="kube-system/kube-proxy-9d7jc" Nov 1 00:43:36.899157 kubelet[1934]: I1101 00:43:36.899166 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxwn\" (UniqueName: \"kubernetes.io/projected/76ba467d-d470-4395-9d8b-96debbcdaa86-kube-api-access-wwxwn\") pod \"kube-proxy-9d7jc\" (UID: \"76ba467d-d470-4395-9d8b-96debbcdaa86\") " pod="kube-system/kube-proxy-9d7jc" Nov 1 00:43:36.899459 kubelet[1934]: I1101 00:43:36.899185 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-run\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899459 kubelet[1934]: I1101 00:43:36.899225 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-bpf-maps\") pod \"cilium-frlh8\" 
(UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899459 kubelet[1934]: I1101 00:43:36.899260 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-etc-cni-netd\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899459 kubelet[1934]: I1101 00:43:36.899292 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-config-path\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899459 kubelet[1934]: I1101 00:43:36.899307 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76ba467d-d470-4395-9d8b-96debbcdaa86-kube-proxy\") pod \"kube-proxy-9d7jc\" (UID: \"76ba467d-d470-4395-9d8b-96debbcdaa86\") " pod="kube-system/kube-proxy-9d7jc" Nov 1 00:43:36.899459 kubelet[1934]: I1101 00:43:36.899324 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-hostproc\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899598 kubelet[1934]: I1101 00:43:36.899349 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-cgroup\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899598 kubelet[1934]: I1101 00:43:36.899363 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccf5477c-955c-4f12-a373-2f4712c8e970-clustermesh-secrets\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899598 kubelet[1934]: I1101 00:43:36.899386 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-lib-modules\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899598 kubelet[1934]: I1101 00:43:36.899405 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-xtables-lock\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899598 kubelet[1934]: I1101 00:43:36.899482 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-net\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899598 kubelet[1934]: I1101 00:43:36.899552 1934 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-kernel\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899837 kubelet[1934]: I1101 00:43:36.899591 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-hubble-tls\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:36.899837 kubelet[1934]: I1101 00:43:36.899665 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxq79\" (UniqueName: \"kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-kube-api-access-mxq79\") pod \"cilium-frlh8\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " pod="kube-system/cilium-frlh8" Nov 1 00:43:37.001188 kubelet[1934]: I1101 00:43:37.001142 1934 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:43:37.156667 kubelet[1934]: E1101 00:43:37.156537 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:37.157474 env[1212]: time="2025-11-01T00:43:37.157280042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9d7jc,Uid:76ba467d-d470-4395-9d8b-96debbcdaa86,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:37.164527 kubelet[1934]: E1101 00:43:37.164496 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:37.165339 env[1212]: time="2025-11-01T00:43:37.165101509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-frlh8,Uid:ccf5477c-955c-4f12-a373-2f4712c8e970,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:37.183751 env[1212]: time="2025-11-01T00:43:37.183676777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:37.183931 env[1212]: time="2025-11-01T00:43:37.183778369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:37.183931 env[1212]: time="2025-11-01T00:43:37.183834155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:37.184218 env[1212]: time="2025-11-01T00:43:37.184114355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/354ff375c0ad2533f871c7a9f94cd8bd55592735207bf5fb16eba89f415599a1 pid=2048 runtime=io.containerd.runc.v2 Nov 1 00:43:37.186739 env[1212]: time="2025-11-01T00:43:37.186575127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:37.186947 env[1212]: time="2025-11-01T00:43:37.186901605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:37.187185 env[1212]: time="2025-11-01T00:43:37.187114699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:37.190213 env[1212]: time="2025-11-01T00:43:37.190124931Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248 pid=2059 runtime=io.containerd.runc.v2 Nov 1 00:43:37.198149 systemd[1]: Started cri-containerd-354ff375c0ad2533f871c7a9f94cd8bd55592735207bf5fb16eba89f415599a1.scope. Nov 1 00:43:37.202724 systemd[1]: Started cri-containerd-751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248.scope. Nov 1 00:43:37.231118 env[1212]: time="2025-11-01T00:43:37.231074614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9d7jc,Uid:76ba467d-d470-4395-9d8b-96debbcdaa86,Namespace:kube-system,Attempt:0,} returns sandbox id \"354ff375c0ad2533f871c7a9f94cd8bd55592735207bf5fb16eba89f415599a1\"" Nov 1 00:43:37.232124 kubelet[1934]: E1101 00:43:37.231641 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:37.234359 env[1212]: time="2025-11-01T00:43:37.234296988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-frlh8,Uid:ccf5477c-955c-4f12-a373-2f4712c8e970,Namespace:kube-system,Attempt:0,} returns sandbox id \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\"" Nov 1 00:43:37.235140 kubelet[1934]: E1101 00:43:37.234933 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:37.236489 env[1212]: time="2025-11-01T00:43:37.236411283Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:43:37.273449 env[1212]: time="2025-11-01T00:43:37.273006517Z" level=info msg="CreateContainer within sandbox \"354ff375c0ad2533f871c7a9f94cd8bd55592735207bf5fb16eba89f415599a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:43:37.294865 systemd[1]: Created slice kubepods-besteffort-pod6d126056_436b_42bf_96ea_1cc20d0761cd.slice. Nov 1 00:43:37.308467 env[1212]: time="2025-11-01T00:43:37.308409131Z" level=info msg="CreateContainer within sandbox \"354ff375c0ad2533f871c7a9f94cd8bd55592735207bf5fb16eba89f415599a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe27fa94176a7cf8f876c82952e22ef3e4c2af55d2b158a3a80b199791dd5401\"" Nov 1 00:43:37.309067 env[1212]: time="2025-11-01T00:43:37.309011101Z" level=info msg="StartContainer for \"fe27fa94176a7cf8f876c82952e22ef3e4c2af55d2b158a3a80b199791dd5401\"" Nov 1 00:43:37.325772 systemd[1]: Started cri-containerd-fe27fa94176a7cf8f876c82952e22ef3e4c2af55d2b158a3a80b199791dd5401.scope. 
Nov 1 00:43:37.354977 env[1212]: time="2025-11-01T00:43:37.354904850Z" level=info msg="StartContainer for \"fe27fa94176a7cf8f876c82952e22ef3e4c2af55d2b158a3a80b199791dd5401\" returns successfully" Nov 1 00:43:37.403503 kubelet[1934]: I1101 00:43:37.403434 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmhp2\" (UniqueName: \"kubernetes.io/projected/6d126056-436b-42bf-96ea-1cc20d0761cd-kube-api-access-rmhp2\") pod \"cilium-operator-6f9c7c5859-4prdl\" (UID: \"6d126056-436b-42bf-96ea-1cc20d0761cd\") " pod="kube-system/cilium-operator-6f9c7c5859-4prdl" Nov 1 00:43:37.403503 kubelet[1934]: I1101 00:43:37.403475 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d126056-436b-42bf-96ea-1cc20d0761cd-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-4prdl\" (UID: \"6d126056-436b-42bf-96ea-1cc20d0761cd\") " pod="kube-system/cilium-operator-6f9c7c5859-4prdl" Nov 1 00:43:37.601189 kubelet[1934]: E1101 00:43:37.601151 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:37.601769 env[1212]: time="2025-11-01T00:43:37.601700382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-4prdl,Uid:6d126056-436b-42bf-96ea-1cc20d0761cd,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:37.619932 env[1212]: time="2025-11-01T00:43:37.619860353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:37.620107 env[1212]: time="2025-11-01T00:43:37.619943762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:37.620107 env[1212]: time="2025-11-01T00:43:37.619975451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:37.620300 env[1212]: time="2025-11-01T00:43:37.620254580Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c pid=2212 runtime=io.containerd.runc.v2 Nov 1 00:43:37.638999 systemd[1]: Started cri-containerd-91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c.scope. 
Nov 1 00:43:37.688415 env[1212]: time="2025-11-01T00:43:37.688360906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-4prdl,Uid:6d126056-436b-42bf-96ea-1cc20d0761cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c\"" Nov 1 00:43:37.689459 kubelet[1934]: E1101 00:43:37.689008 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:38.301654 kubelet[1934]: E1101 00:43:38.301609 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:39.304246 kubelet[1934]: E1101 00:43:39.304212 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:40.191058 kubelet[1934]: E1101 00:43:40.190606 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:40.206578 kubelet[1934]: I1101 00:43:40.206503 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9d7jc" podStartSLOduration=4.20648025 podStartE2EDuration="4.20648025s" podCreationTimestamp="2025-11-01 00:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:38.33067959 +0000 UTC m=+8.162710914" watchObservedRunningTime="2025-11-01 00:43:40.20648025 +0000 UTC m=+10.038511543" Nov 1 00:43:40.308649 kubelet[1934]: E1101 00:43:40.308600 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:41.693741 kubelet[1934]: E1101 00:43:41.692801 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:42.311464 kubelet[1934]: E1101 00:43:42.311429 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:44.463864 kubelet[1934]: E1101 00:43:44.463768 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:45.588389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784538693.mount: Deactivated successfully. 
Nov 1 00:43:50.315397 env[1212]: time="2025-11-01T00:43:50.315333937Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:50.319357 env[1212]: time="2025-11-01T00:43:50.319228944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:50.323159 env[1212]: time="2025-11-01T00:43:50.323046083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:43:50.323533 env[1212]: time="2025-11-01T00:43:50.323459342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:50.325507 env[1212]: time="2025-11-01T00:43:50.325458908Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:43:50.340895 env[1212]: time="2025-11-01T00:43:50.340829259Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:43:50.371248 env[1212]: time="2025-11-01T00:43:50.371157744Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\"" Nov 1 00:43:50.371988 env[1212]: time="2025-11-01T00:43:50.371925100Z" level=info msg="StartContainer for \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\"" Nov 1 00:43:50.412395 systemd[1]: Started cri-containerd-5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956.scope. Nov 1 00:43:50.686745 systemd[1]: cri-containerd-5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956.scope: Deactivated successfully. 
Nov 1 00:43:50.687602 env[1212]: time="2025-11-01T00:43:50.687492849Z" level=info msg="StartContainer for \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\" returns successfully" Nov 1 00:43:50.799303 env[1212]: time="2025-11-01T00:43:50.799234571Z" level=info msg="shim disconnected" id=5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956 Nov 1 00:43:50.799303 env[1212]: time="2025-11-01T00:43:50.799278614Z" level=warning msg="cleaning up after shim disconnected" id=5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956 namespace=k8s.io Nov 1 00:43:50.799303 env[1212]: time="2025-11-01T00:43:50.799288253Z" level=info msg="cleaning up dead shim" Nov 1 00:43:50.805591 env[1212]: time="2025-11-01T00:43:50.805530721Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2390 runtime=io.containerd.runc.v2\n" Nov 1 00:43:51.338470 kubelet[1934]: E1101 00:43:51.338384 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:51.345241 env[1212]: time="2025-11-01T00:43:51.345183752Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:43:51.361963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956-rootfs.mount: Deactivated successfully. Nov 1 00:43:51.362577 env[1212]: time="2025-11-01T00:43:51.362492154Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\"" Nov 1 00:43:51.364142 env[1212]: time="2025-11-01T00:43:51.363262825Z" level=info msg="StartContainer for \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\"" Nov 1 00:43:51.384294 systemd[1]: Started cri-containerd-d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5.scope. Nov 1 00:43:51.415975 env[1212]: time="2025-11-01T00:43:51.415680380Z" level=info msg="StartContainer for \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\" returns successfully" Nov 1 00:43:51.424644 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:43:51.424853 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:43:51.425080 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:43:51.427365 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:43:51.429715 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:43:51.431000 systemd[1]: cri-containerd-d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5.scope: Deactivated successfully. Nov 1 00:43:51.437529 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 00:43:51.449892 env[1212]: time="2025-11-01T00:43:51.449835075Z" level=info msg="shim disconnected" id=d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5 Nov 1 00:43:51.449892 env[1212]: time="2025-11-01T00:43:51.449887964Z" level=warning msg="cleaning up after shim disconnected" id=d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5 namespace=k8s.io Nov 1 00:43:51.450103 env[1212]: time="2025-11-01T00:43:51.449901540Z" level=info msg="cleaning up dead shim" Nov 1 00:43:51.456756 env[1212]: time="2025-11-01T00:43:51.456704791Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2453 runtime=io.containerd.runc.v2\n" Nov 1 00:43:52.342988 kubelet[1934]: E1101 00:43:52.341425 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:52.353078 env[1212]: time="2025-11-01T00:43:52.350450776Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:43:52.362279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5-rootfs.mount: Deactivated successfully. Nov 1 00:43:52.372926 env[1212]: time="2025-11-01T00:43:52.372860384Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\"" Nov 1 00:43:52.373582 env[1212]: time="2025-11-01T00:43:52.373535966Z" level=info msg="StartContainer for \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\"" Nov 1 00:43:52.399309 systemd[1]: Started cri-containerd-ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6.scope. 
Nov 1 00:43:52.405007 env[1212]: time="2025-11-01T00:43:52.404954794Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:52.409978 env[1212]: time="2025-11-01T00:43:52.409910573Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:52.411240 env[1212]: time="2025-11-01T00:43:52.411217984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:52.411649 env[1212]: time="2025-11-01T00:43:52.411625061Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:43:52.417981 env[1212]: time="2025-11-01T00:43:52.417938887Z" level=info msg="CreateContainer within sandbox \"91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:43:52.433669 env[1212]: time="2025-11-01T00:43:52.433524327Z" level=info msg="StartContainer for \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\" returns successfully" Nov 1 00:43:52.434137 systemd[1]: cri-containerd-ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6.scope: Deactivated successfully. Nov 1 00:43:52.437832 env[1212]: time="2025-11-01T00:43:52.437785219Z" level=info msg="CreateContainer within sandbox \"91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\"" Nov 1 00:43:52.438456 env[1212]: time="2025-11-01T00:43:52.438399305Z" level=info msg="StartContainer for \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\"" Nov 1 00:43:52.454414 systemd[1]: Started cri-containerd-be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54.scope. 
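The PullImage round trip above resolves a tag-plus-digest reference, quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f…, to the sha256:ed355de9… reference that the surrounding ImageCreate events also use for the stored image. A minimal string-handling sketch for taking such a reference apart; it is not a containerd API, and it assumes the registry host carries no port (so the first colon separates the tag).

```python
# Split a digest-pinned image reference like the ones in the PullImage entries above.
# Plain string handling only; assumes no ":port" in the registry host.
def split_reference(ref):
    name_tag, _, digest = ref.partition("@")
    name, _, tag = name_tag.partition(":")
    return name, tag or None, digest or None

print(split_reference(
    "quay.io/cilium/operator-generic:v1.12.5"
    "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
))
# ('quay.io/cilium/operator-generic', 'v1.12.5', 'sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e')
```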
Nov 1 00:43:52.729798 env[1212]: time="2025-11-01T00:43:52.729653559Z" level=info msg="StartContainer for \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\" returns successfully" Nov 1 00:43:52.730943 env[1212]: time="2025-11-01T00:43:52.730888164Z" level=info msg="shim disconnected" id=ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6 Nov 1 00:43:52.730943 env[1212]: time="2025-11-01T00:43:52.730940543Z" level=warning msg="cleaning up after shim disconnected" id=ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6 namespace=k8s.io Nov 1 00:43:52.731080 env[1212]: time="2025-11-01T00:43:52.730963586Z" level=info msg="cleaning up dead shim" Nov 1 00:43:52.749673 env[1212]: time="2025-11-01T00:43:52.749601483Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2548 runtime=io.containerd.runc.v2\n" Nov 1 00:43:53.345447 kubelet[1934]: E1101 00:43:53.345415 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:53.351167 kubelet[1934]: E1101 00:43:53.351131 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:53.351953 env[1212]: time="2025-11-01T00:43:53.351892336Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:43:53.363247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6-rootfs.mount: Deactivated successfully. Nov 1 00:43:53.381784 env[1212]: time="2025-11-01T00:43:53.381631909Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\"" Nov 1 00:43:53.382784 env[1212]: time="2025-11-01T00:43:53.382747429Z" level=info msg="StartContainer for \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\"" Nov 1 00:43:53.407848 systemd[1]: Started cri-containerd-e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0.scope. Nov 1 00:43:53.412904 kubelet[1934]: I1101 00:43:53.412533 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-4prdl" podStartSLOduration=1.689555752 podStartE2EDuration="16.412517439s" podCreationTimestamp="2025-11-01 00:43:37 +0000 UTC" firstStartedPulling="2025-11-01 00:43:37.689992376 +0000 UTC m=+7.522023669" lastFinishedPulling="2025-11-01 00:43:52.412954073 +0000 UTC m=+22.244985356" observedRunningTime="2025-11-01 00:43:53.412462837 +0000 UTC m=+23.244494130" watchObservedRunningTime="2025-11-01 00:43:53.412517439 +0000 UTC m=+23.244548732" Nov 1 00:43:53.435962 systemd[1]: cri-containerd-e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0.scope: Deactivated successfully. 
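The pod_startup_latency_tracker entry above for cilium-operator-6f9c7c5859-4prdl is internally consistent: podStartSLOduration is podStartE2EDuration with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A small worked check using only values copied from that entry; it lands about 10 ns away from the logged 1.689555752, which is just rounding in the printed figures.

```python
from decimal import Decimal

# Values copied from the cilium-operator pod_startup_latency_tracker entry above.
e2e        = Decimal("16.412517439")  # podStartE2EDuration, in seconds
pull_start = Decimal("37.689992376")  # firstStartedPulling, seconds past 00:43
pull_end   = Decimal("52.412954073")  # lastFinishedPulling, seconds past 00:43

slo = e2e - (pull_end - pull_start)
print("podStartSLOduration ~", slo, "s")  # 1.689555742 vs. the logged 1.689555752
```

The coredns entries further down show the degenerate case: no pull was needed, firstStartedPulling/lastFinishedPulling stay at the zero time, and podStartSLOduration equals podStartE2EDuration.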
Nov 1 00:43:53.436672 env[1212]: time="2025-11-01T00:43:53.436623891Z" level=info msg="StartContainer for \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\" returns successfully" Nov 1 00:43:53.465317 env[1212]: time="2025-11-01T00:43:53.464928753Z" level=info msg="shim disconnected" id=e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0 Nov 1 00:43:53.465317 env[1212]: time="2025-11-01T00:43:53.465008273Z" level=warning msg="cleaning up after shim disconnected" id=e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0 namespace=k8s.io Nov 1 00:43:53.465317 env[1212]: time="2025-11-01T00:43:53.465043690Z" level=info msg="cleaning up dead shim" Nov 1 00:43:53.474355 env[1212]: time="2025-11-01T00:43:53.474285764Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2604 runtime=io.containerd.runc.v2\n" Nov 1 00:43:54.353715 kubelet[1934]: E1101 00:43:54.353679 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:54.354133 kubelet[1934]: E1101 00:43:54.353677 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:54.361784 systemd[1]: run-containerd-runc-k8s.io-e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0-runc.eKVf0v.mount: Deactivated successfully. Nov 1 00:43:54.361879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0-rootfs.mount: Deactivated successfully. Nov 1 00:43:54.370112 env[1212]: time="2025-11-01T00:43:54.370019632Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:43:54.393499 env[1212]: time="2025-11-01T00:43:54.393442315Z" level=info msg="CreateContainer within sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\"" Nov 1 00:43:54.394128 env[1212]: time="2025-11-01T00:43:54.393907130Z" level=info msg="StartContainer for \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\"" Nov 1 00:43:54.419178 systemd[1]: Started cri-containerd-868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be.scope. Nov 1 00:43:54.454152 env[1212]: time="2025-11-01T00:43:54.454095666Z" level=info msg="StartContainer for \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\" returns successfully" Nov 1 00:43:54.595954 kubelet[1934]: I1101 00:43:54.595899 1934 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:43:54.651214 systemd[1]: Created slice kubepods-burstable-podd3cd7fec_d304_4a2d_afe7_8a5f8d90479d.slice. Nov 1 00:43:54.657930 systemd[1]: Created slice kubepods-burstable-pode76adb5f_3673_4038_9731_2073f1dcfad0.slice. 
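Read in sequence, the containerd entries above step through the cilium pod's containers inside sandbox 751f0729…, created strictly one at a time: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each run briefly (their .scope deactivates and the shim disconnects before the next CreateContainer appears), and cilium-agent is the container that stays up. A throwaway parsing sketch that recovers that order from the ContainerMetadata names; the sample strings are trimmed stand-ins for the journal lines, not verbatim copies.

```python
import re

# Recover container names, in creation order, from containerd's
# "CreateContainer within sandbox ... for container &ContainerMetadata{Name:...}" messages.
NAME = re.compile(r"&ContainerMetadata\{Name:([^,}]+)")

samples = [  # trimmed stand-ins for the entries above
    "CreateContainer within sandbox 751f0729... for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}",
    "CreateContainer within sandbox 751f0729... for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}",
    "CreateContainer within sandbox 751f0729... for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}",
    "CreateContainer within sandbox 751f0729... for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}",
    "CreateContainer within sandbox 751f0729... for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}",
]

print(" -> ".join(NAME.search(s).group(1) for s in samples))
# mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent
```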
Nov 1 00:43:54.806267 kubelet[1934]: I1101 00:43:54.806207 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfthf\" (UniqueName: \"kubernetes.io/projected/d3cd7fec-d304-4a2d-afe7-8a5f8d90479d-kube-api-access-wfthf\") pod \"coredns-66bc5c9577-78bnr\" (UID: \"d3cd7fec-d304-4a2d-afe7-8a5f8d90479d\") " pod="kube-system/coredns-66bc5c9577-78bnr" Nov 1 00:43:54.806436 kubelet[1934]: I1101 00:43:54.806302 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3cd7fec-d304-4a2d-afe7-8a5f8d90479d-config-volume\") pod \"coredns-66bc5c9577-78bnr\" (UID: \"d3cd7fec-d304-4a2d-afe7-8a5f8d90479d\") " pod="kube-system/coredns-66bc5c9577-78bnr" Nov 1 00:43:54.806436 kubelet[1934]: I1101 00:43:54.806322 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xtm6\" (UniqueName: \"kubernetes.io/projected/e76adb5f-3673-4038-9731-2073f1dcfad0-kube-api-access-7xtm6\") pod \"coredns-66bc5c9577-tqjb6\" (UID: \"e76adb5f-3673-4038-9731-2073f1dcfad0\") " pod="kube-system/coredns-66bc5c9577-tqjb6" Nov 1 00:43:54.806436 kubelet[1934]: I1101 00:43:54.806337 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e76adb5f-3673-4038-9731-2073f1dcfad0-config-volume\") pod \"coredns-66bc5c9577-tqjb6\" (UID: \"e76adb5f-3673-4038-9731-2073f1dcfad0\") " pod="kube-system/coredns-66bc5c9577-tqjb6" Nov 1 00:43:54.960779 kubelet[1934]: E1101 00:43:54.960634 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:54.961809 env[1212]: time="2025-11-01T00:43:54.961751358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-78bnr,Uid:d3cd7fec-d304-4a2d-afe7-8a5f8d90479d,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:54.964247 kubelet[1934]: E1101 00:43:54.964196 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:54.964685 env[1212]: time="2025-11-01T00:43:54.964641806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tqjb6,Uid:e76adb5f-3673-4038-9731-2073f1dcfad0,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:55.358776 kubelet[1934]: E1101 00:43:55.358724 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:55.369855 systemd[1]: run-containerd-runc-k8s.io-868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be-runc.fNk4K9.mount: Deactivated successfully. 
Nov 1 00:43:55.375282 kubelet[1934]: I1101 00:43:55.375212 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-frlh8" podStartSLOduration=6.286064908 podStartE2EDuration="19.375191887s" podCreationTimestamp="2025-11-01 00:43:36 +0000 UTC" firstStartedPulling="2025-11-01 00:43:37.235993241 +0000 UTC m=+7.068024534" lastFinishedPulling="2025-11-01 00:43:50.32512022 +0000 UTC m=+20.157151513" observedRunningTime="2025-11-01 00:43:55.374634639 +0000 UTC m=+25.206665932" watchObservedRunningTime="2025-11-01 00:43:55.375191887 +0000 UTC m=+25.207223170" Nov 1 00:43:56.144727 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:53644.service. Nov 1 00:43:56.210855 sshd[2787]: Accepted publickey for core from 10.0.0.1 port 53644 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:56.212106 sshd[2787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:56.215950 systemd-logind[1194]: New session 8 of user core. Nov 1 00:43:56.216788 systemd[1]: Started session-8.scope. Nov 1 00:43:56.336344 sshd[2787]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:56.339016 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:53644.service: Deactivated successfully. Nov 1 00:43:56.339839 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:43:56.340673 systemd-logind[1194]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:43:56.341391 systemd-logind[1194]: Removed session 8. Nov 1 00:43:56.360688 kubelet[1934]: E1101 00:43:56.360662 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:56.537840 systemd-networkd[1035]: cilium_host: Link UP Nov 1 00:43:56.538797 systemd-networkd[1035]: cilium_net: Link UP Nov 1 00:43:56.544109 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:43:56.546618 systemd-networkd[1035]: cilium_net: Gained carrier Nov 1 00:43:56.551075 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:43:56.551335 systemd-networkd[1035]: cilium_host: Gained carrier Nov 1 00:43:56.551580 systemd-networkd[1035]: cilium_net: Gained IPv6LL Nov 1 00:43:56.551776 systemd-networkd[1035]: cilium_host: Gained IPv6LL Nov 1 00:43:56.632351 systemd-networkd[1035]: cilium_vxlan: Link UP Nov 1 00:43:56.632361 systemd-networkd[1035]: cilium_vxlan: Gained carrier Nov 1 00:43:56.823081 kernel: NET: Registered PF_ALG protocol family Nov 1 00:43:57.362451 kubelet[1934]: E1101 00:43:57.362400 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:57.430008 systemd-networkd[1035]: lxc_health: Link UP Nov 1 00:43:57.434095 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:43:57.434395 systemd-networkd[1035]: lxc_health: Gained carrier Nov 1 00:43:58.010436 systemd-networkd[1035]: lxcc019759b2042: Link UP Nov 1 00:43:58.018073 kernel: eth0: renamed from tmpc7bf2 Nov 1 00:43:58.027432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:43:58.027545 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc019759b2042: link becomes ready Nov 1 00:43:58.027562 systemd-networkd[1035]: lxcc019759b2042: Gained carrier Nov 1 00:43:58.033432 systemd-networkd[1035]: lxcd499007e8637: Link UP Nov 1 00:43:58.043052 kernel: eth0: renamed from tmpaffc2 Nov 1 
00:43:58.052242 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd499007e8637: link becomes ready Nov 1 00:43:58.053453 systemd-networkd[1035]: lxcd499007e8637: Gained carrier Nov 1 00:43:58.114264 systemd-networkd[1035]: cilium_vxlan: Gained IPv6LL Nov 1 00:43:58.818190 systemd-networkd[1035]: lxc_health: Gained IPv6LL Nov 1 00:43:59.164014 kubelet[1934]: E1101 00:43:59.163853 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:59.458378 systemd-networkd[1035]: lxcc019759b2042: Gained IPv6LL Nov 1 00:43:59.970213 systemd-networkd[1035]: lxcd499007e8637: Gained IPv6LL Nov 1 00:44:01.340808 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:36202.service. Nov 1 00:44:01.384181 sshd[3182]: Accepted publickey for core from 10.0.0.1 port 36202 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:01.384658 sshd[3182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:01.389454 systemd[1]: Started session-9.scope. Nov 1 00:44:01.396882 systemd-logind[1194]: New session 9 of user core. Nov 1 00:44:01.557264 sshd[3182]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:01.560450 systemd-logind[1194]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:44:01.561443 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:36202.service: Deactivated successfully. Nov 1 00:44:01.562102 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:44:01.562996 systemd-logind[1194]: Removed session 9. Nov 1 00:44:01.797020 env[1212]: time="2025-11-01T00:44:01.796920603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:01.797020 env[1212]: time="2025-11-01T00:44:01.796986036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:01.797020 env[1212]: time="2025-11-01T00:44:01.796998138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:01.797545 env[1212]: time="2025-11-01T00:44:01.797397640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7bf26e7dfef5698bc40498e526a0fb2d8135af7dfefd82f13b0b262ccfc5b1c pid=3215 runtime=io.containerd.runc.v2 Nov 1 00:44:01.798135 env[1212]: time="2025-11-01T00:44:01.798044766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:01.798135 env[1212]: time="2025-11-01T00:44:01.798092796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:01.798135 env[1212]: time="2025-11-01T00:44:01.798114487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:01.802133 env[1212]: time="2025-11-01T00:44:01.802065142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/affc2d8e90e9385ea13312218c8983cef3f1bfa2ff0e604a4742879a74071c10 pid=3224 runtime=io.containerd.runc.v2 Nov 1 00:44:01.816394 systemd[1]: Started cri-containerd-affc2d8e90e9385ea13312218c8983cef3f1bfa2ff0e604a4742879a74071c10.scope. 
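The systemd-networkd and kernel messages above record the Cilium datapath interfaces coming up on the node: cilium_host, cilium_net and cilium_vxlan first, then lxc_health and the per-pod lxc* devices, each going Link UP, gaining carrier and an IPv6 link-local address, while the pod-side interfaces are renamed to eth0 from their temporary tmp* names. A small grouping sketch over messages of that shape; the sample strings are abbreviated copies of entries above.

```python
import re
from collections import defaultdict

# Group systemd-networkd messages by interface to see each link's progression.
NETWORKD = re.compile(r"systemd-networkd\[\d+\]: (?P<ifname>[\w.]+): (?P<event>.+)")

samples = [  # abbreviated copies of entries above
    "systemd-networkd[1035]: cilium_host: Link UP",
    "systemd-networkd[1035]: cilium_host: Gained carrier",
    "systemd-networkd[1035]: cilium_host: Gained IPv6LL",
    "systemd-networkd[1035]: cilium_vxlan: Link UP",
    "systemd-networkd[1035]: cilium_vxlan: Gained carrier",
    "systemd-networkd[1035]: lxc_health: Link UP",
    "systemd-networkd[1035]: lxc_health: Gained carrier",
]

timeline = defaultdict(list)
for line in samples:
    m = NETWORKD.search(line)
    if m:
        timeline[m.group("ifname")].append(m.group("event"))

for ifname, events in timeline.items():
    print(ifname, "->", " / ".join(events))
```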
Nov 1 00:44:01.826623 systemd[1]: Started cri-containerd-c7bf26e7dfef5698bc40498e526a0fb2d8135af7dfefd82f13b0b262ccfc5b1c.scope. Nov 1 00:44:01.835402 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:44:01.839455 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:44:01.862094 env[1212]: time="2025-11-01T00:44:01.861990605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tqjb6,Uid:e76adb5f-3673-4038-9731-2073f1dcfad0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7bf26e7dfef5698bc40498e526a0fb2d8135af7dfefd82f13b0b262ccfc5b1c\"" Nov 1 00:44:01.862977 kubelet[1934]: E1101 00:44:01.862938 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:01.871724 env[1212]: time="2025-11-01T00:44:01.871677751Z" level=info msg="CreateContainer within sandbox \"c7bf26e7dfef5698bc40498e526a0fb2d8135af7dfefd82f13b0b262ccfc5b1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:44:01.879382 env[1212]: time="2025-11-01T00:44:01.879277213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-78bnr,Uid:d3cd7fec-d304-4a2d-afe7-8a5f8d90479d,Namespace:kube-system,Attempt:0,} returns sandbox id \"affc2d8e90e9385ea13312218c8983cef3f1bfa2ff0e604a4742879a74071c10\"" Nov 1 00:44:01.880932 kubelet[1934]: E1101 00:44:01.880467 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:01.887410 env[1212]: time="2025-11-01T00:44:01.887356176Z" level=info msg="CreateContainer within sandbox \"affc2d8e90e9385ea13312218c8983cef3f1bfa2ff0e604a4742879a74071c10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:44:01.890675 env[1212]: time="2025-11-01T00:44:01.890620382Z" level=info msg="CreateContainer within sandbox \"c7bf26e7dfef5698bc40498e526a0fb2d8135af7dfefd82f13b0b262ccfc5b1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95ebfb62ed3f96ad221c8c007986811d71a4078804f5aed9a4242f260832ec05\"" Nov 1 00:44:01.891474 env[1212]: time="2025-11-01T00:44:01.891448027Z" level=info msg="StartContainer for \"95ebfb62ed3f96ad221c8c007986811d71a4078804f5aed9a4242f260832ec05\"" Nov 1 00:44:01.902235 env[1212]: time="2025-11-01T00:44:01.902168996Z" level=info msg="CreateContainer within sandbox \"affc2d8e90e9385ea13312218c8983cef3f1bfa2ff0e604a4742879a74071c10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e1c893214fda288e644e187be4e7f95ab4213e972f84528ee6c8e471a0539a7e\"" Nov 1 00:44:01.902901 env[1212]: time="2025-11-01T00:44:01.902817264Z" level=info msg="StartContainer for \"e1c893214fda288e644e187be4e7f95ab4213e972f84528ee6c8e471a0539a7e\"" Nov 1 00:44:01.911011 systemd[1]: Started cri-containerd-95ebfb62ed3f96ad221c8c007986811d71a4078804f5aed9a4242f260832ec05.scope. Nov 1 00:44:01.925189 systemd[1]: Started cri-containerd-e1c893214fda288e644e187be4e7f95ab4213e972f84528ee6c8e471a0539a7e.scope. 
Nov 1 00:44:01.945907 env[1212]: time="2025-11-01T00:44:01.945819412Z" level=info msg="StartContainer for \"95ebfb62ed3f96ad221c8c007986811d71a4078804f5aed9a4242f260832ec05\" returns successfully" Nov 1 00:44:01.956514 env[1212]: time="2025-11-01T00:44:01.956416558Z" level=info msg="StartContainer for \"e1c893214fda288e644e187be4e7f95ab4213e972f84528ee6c8e471a0539a7e\" returns successfully" Nov 1 00:44:02.374669 kubelet[1934]: E1101 00:44:02.374627 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:02.376605 kubelet[1934]: E1101 00:44:02.376581 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:02.385954 kubelet[1934]: I1101 00:44:02.385885 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tqjb6" podStartSLOduration=25.38586948 podStartE2EDuration="25.38586948s" podCreationTimestamp="2025-11-01 00:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:02.385365202 +0000 UTC m=+32.217396485" watchObservedRunningTime="2025-11-01 00:44:02.38586948 +0000 UTC m=+32.217900773" Nov 1 00:44:02.410524 kubelet[1934]: I1101 00:44:02.410437 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-78bnr" podStartSLOduration=25.41041238 podStartE2EDuration="25.41041238s" podCreationTimestamp="2025-11-01 00:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:02.396645099 +0000 UTC m=+32.228676402" watchObservedRunningTime="2025-11-01 00:44:02.41041238 +0000 UTC m=+32.242443683" Nov 1 00:44:03.378799 kubelet[1934]: E1101 00:44:03.378766 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:03.379233 kubelet[1934]: E1101 00:44:03.378843 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:04.040206 kubelet[1934]: I1101 00:44:04.040164 1934 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:44:04.040577 kubelet[1934]: E1101 00:44:04.040562 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:04.381126 kubelet[1934]: E1101 00:44:04.380996 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:04.381126 kubelet[1934]: E1101 00:44:04.381061 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:04.381487 kubelet[1934]: E1101 00:44:04.381205 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 
00:44:06.561860 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:36204.service. Nov 1 00:44:06.605930 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 36204 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:06.607409 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:06.611211 systemd-logind[1194]: New session 10 of user core. Nov 1 00:44:06.612074 systemd[1]: Started session-10.scope. Nov 1 00:44:06.722637 sshd[3376]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:06.725402 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:36204.service: Deactivated successfully. Nov 1 00:44:06.726320 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:44:06.726978 systemd-logind[1194]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:44:06.727785 systemd-logind[1194]: Removed session 10. Nov 1 00:44:11.726923 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:47812.service. Nov 1 00:44:11.767366 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 47812 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:11.768731 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:11.772459 systemd-logind[1194]: New session 11 of user core. Nov 1 00:44:11.773668 systemd[1]: Started session-11.scope. Nov 1 00:44:11.879692 sshd[3392]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:11.881813 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:47812.service: Deactivated successfully. Nov 1 00:44:11.882680 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:44:11.883450 systemd-logind[1194]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:44:11.884249 systemd-logind[1194]: Removed session 11. Nov 1 00:44:16.886253 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:47818.service. Nov 1 00:44:16.933452 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 47818 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:16.935670 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:16.941982 systemd-logind[1194]: New session 12 of user core. Nov 1 00:44:16.943379 systemd[1]: Started session-12.scope. Nov 1 00:44:17.108998 sshd[3406]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:17.112580 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:47818.service: Deactivated successfully. Nov 1 00:44:17.113212 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:44:17.115275 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:47822.service. Nov 1 00:44:17.115747 systemd-logind[1194]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:44:17.116914 systemd-logind[1194]: Removed session 12. Nov 1 00:44:17.158094 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 47822 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:17.159361 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:17.162844 systemd-logind[1194]: New session 13 of user core. Nov 1 00:44:17.163958 systemd[1]: Started session-13.scope. Nov 1 00:44:17.334187 sshd[3420]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:17.338394 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:47836.service. Nov 1 00:44:17.340735 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:47822.service: Deactivated successfully. Nov 1 00:44:17.341608 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 1 00:44:17.342374 systemd-logind[1194]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:44:17.344097 systemd-logind[1194]: Removed session 13. Nov 1 00:44:17.382418 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 47836 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:17.384151 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:17.388505 systemd-logind[1194]: New session 14 of user core. Nov 1 00:44:17.389570 systemd[1]: Started session-14.scope. Nov 1 00:44:17.514422 sshd[3430]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:17.517894 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:47836.service: Deactivated successfully. Nov 1 00:44:17.518729 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:44:17.519518 systemd-logind[1194]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:44:17.520548 systemd-logind[1194]: Removed session 14. Nov 1 00:44:22.520568 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:36292.service. Nov 1 00:44:22.561448 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 36292 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:22.562889 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:22.566709 systemd-logind[1194]: New session 15 of user core. Nov 1 00:44:22.567513 systemd[1]: Started session-15.scope. Nov 1 00:44:22.681336 sshd[3444]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:22.683949 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:36292.service: Deactivated successfully. Nov 1 00:44:22.684744 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:44:22.685511 systemd-logind[1194]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:44:22.686184 systemd-logind[1194]: Removed session 15. Nov 1 00:44:27.687288 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:36308.service. Nov 1 00:44:27.729062 sshd[3457]: Accepted publickey for core from 10.0.0.1 port 36308 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:27.729807 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:27.734036 systemd-logind[1194]: New session 16 of user core. Nov 1 00:44:27.734960 systemd[1]: Started session-16.scope. Nov 1 00:44:27.841751 sshd[3457]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:27.844010 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:36308.service: Deactivated successfully. Nov 1 00:44:27.844815 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:44:27.845604 systemd-logind[1194]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:44:27.846377 systemd-logind[1194]: Removed session 16. Nov 1 00:44:32.846209 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:41660.service. Nov 1 00:44:32.885352 sshd[3473]: Accepted publickey for core from 10.0.0.1 port 41660 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:32.886502 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:32.889966 systemd-logind[1194]: New session 17 of user core. Nov 1 00:44:32.891056 systemd[1]: Started session-17.scope. Nov 1 00:44:33.009181 sshd[3473]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:33.011578 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:41660.service: Deactivated successfully. Nov 1 00:44:33.012302 systemd[1]: session-17.scope: Deactivated successfully. 
Nov 1 00:44:33.013096 systemd-logind[1194]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:44:33.013869 systemd-logind[1194]: Removed session 17. Nov 1 00:44:38.013794 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:41672.service. Nov 1 00:44:38.059741 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 41672 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:38.060981 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:38.064605 systemd-logind[1194]: New session 18 of user core. Nov 1 00:44:38.065697 systemd[1]: Started session-18.scope. Nov 1 00:44:38.169238 sshd[3488]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:38.172603 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:41672.service: Deactivated successfully. Nov 1 00:44:38.173290 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:44:38.173904 systemd-logind[1194]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:44:38.175139 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:41682.service. Nov 1 00:44:38.175863 systemd-logind[1194]: Removed session 18. Nov 1 00:44:38.216075 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 41682 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:38.217146 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:38.220589 systemd-logind[1194]: New session 19 of user core. Nov 1 00:44:38.221614 systemd[1]: Started session-19.scope. Nov 1 00:44:38.744133 sshd[3501]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:38.747008 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:41682.service: Deactivated successfully. Nov 1 00:44:38.747761 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:44:38.748365 systemd-logind[1194]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:44:38.749468 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:41686.service. Nov 1 00:44:38.750289 systemd-logind[1194]: Removed session 19. Nov 1 00:44:38.792070 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 41686 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:38.793372 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:38.796816 systemd-logind[1194]: New session 20 of user core. Nov 1 00:44:38.797731 systemd[1]: Started session-20.scope. Nov 1 00:44:39.473750 sshd[3512]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:39.476822 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:41686.service: Deactivated successfully. Nov 1 00:44:39.478461 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:44:39.479174 systemd-logind[1194]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:44:39.480708 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:41698.service. Nov 1 00:44:39.481741 systemd-logind[1194]: Removed session 20. Nov 1 00:44:39.527364 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 41698 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:39.528751 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:39.532557 systemd-logind[1194]: New session 21 of user core. Nov 1 00:44:39.533691 systemd[1]: Started session-21.scope. 
Nov 1 00:44:40.240460 sshd[3529]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:40.245717 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:41698.service: Deactivated successfully. Nov 1 00:44:40.246457 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:44:40.247284 systemd-logind[1194]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:44:40.249336 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:57310.service. Nov 1 00:44:40.250394 systemd-logind[1194]: Removed session 21. Nov 1 00:44:40.292061 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 57310 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:40.293471 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:40.298159 systemd-logind[1194]: New session 22 of user core. Nov 1 00:44:40.299346 systemd[1]: Started session-22.scope. Nov 1 00:44:40.417876 sshd[3541]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:40.421002 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:57310.service: Deactivated successfully. Nov 1 00:44:40.421846 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:44:40.422733 systemd-logind[1194]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:44:40.423432 systemd-logind[1194]: Removed session 22. Nov 1 00:44:42.280407 kubelet[1934]: E1101 00:44:42.280356 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:45.436136 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:57314.service. Nov 1 00:44:45.513011 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 57314 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:45.514247 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:45.531127 systemd-logind[1194]: New session 23 of user core. Nov 1 00:44:45.533815 systemd[1]: Started session-23.scope. Nov 1 00:44:45.680447 sshd[3556]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:45.683404 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:57314.service: Deactivated successfully. Nov 1 00:44:45.684242 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:44:45.684968 systemd-logind[1194]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:44:45.685954 systemd-logind[1194]: Removed session 23. Nov 1 00:44:50.692835 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:59920.service. Nov 1 00:44:50.775453 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 59920 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:50.778705 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:50.793498 systemd-logind[1194]: New session 24 of user core. Nov 1 00:44:50.797176 systemd[1]: Started session-24.scope. Nov 1 00:44:51.018149 sshd[3571]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:51.023468 systemd-logind[1194]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:44:51.024719 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:59920.service: Deactivated successfully. Nov 1 00:44:51.026342 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:44:51.047464 systemd-logind[1194]: Removed session 24. Nov 1 00:44:56.040094 systemd[1]: Started sshd@24-10.0.0.111:22-10.0.0.1:59924.service. 
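The sshd and systemd-logind entries above are a series of brief remote logins from 10.0.0.1: each is accepted by publickey, gets its own session-N.scope, and is closed again within a second or so. Pairing the "Started session-N.scope." and "session-N.scope: Deactivated successfully." timestamps gives the lifetime of each login; the sketch below does that for two of the sessions, with timestamps copied from the log.

```python
from datetime import datetime

# Pair session-N.scope start/stop timestamps (copied from the log above) to get lifetimes.
# The journal prefix carries no year, so strptime defaults to 1900; only differences matter here.
EVENTS = [
    ("Nov 1 00:44:17.163958", "started", "13"),
    ("Nov 1 00:44:17.341608", "stopped", "13"),
    ("Nov 1 00:44:38.221614", "started", "19"),
    ("Nov 1 00:44:38.747761", "stopped", "19"),
]

def parse(ts):
    return datetime.strptime(ts, "%b %d %H:%M:%S.%f")

opened = {}
for ts, kind, session in EVENTS:
    if kind == "started":
        opened[session] = parse(ts)
    else:
        lifetime = (parse(ts) - opened[session]).total_seconds()
        print(f"session-{session}.scope lived {lifetime:.3f}s")
```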
Nov 1 00:44:56.094965 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 59924 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:56.097947 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:56.108737 systemd[1]: Started session-25.scope. Nov 1 00:44:56.109291 systemd-logind[1194]: New session 25 of user core. Nov 1 00:44:56.230416 sshd[3585]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:56.234443 systemd[1]: Started sshd@25-10.0.0.111:22-10.0.0.1:59926.service. Nov 1 00:44:56.235043 systemd[1]: sshd@24-10.0.0.111:22-10.0.0.1:59924.service: Deactivated successfully. Nov 1 00:44:56.235594 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:44:56.236185 systemd-logind[1194]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:44:56.237005 systemd-logind[1194]: Removed session 25. Nov 1 00:44:56.283770 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 59926 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:56.289770 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:56.305286 systemd-logind[1194]: New session 26 of user core. Nov 1 00:44:56.307698 systemd[1]: Started session-26.scope. Nov 1 00:44:57.926929 env[1212]: time="2025-11-01T00:44:57.926877163Z" level=info msg="StopContainer for \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\" with timeout 30 (s)" Nov 1 00:44:57.928073 env[1212]: time="2025-11-01T00:44:57.928051397Z" level=info msg="Stop container \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\" with signal terminated" Nov 1 00:44:57.942277 systemd[1]: cri-containerd-be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54.scope: Deactivated successfully. Nov 1 00:44:57.959894 env[1212]: time="2025-11-01T00:44:57.959816411Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:44:57.966326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54-rootfs.mount: Deactivated successfully. 
Nov 1 00:44:57.968314 env[1212]: time="2025-11-01T00:44:57.968273016Z" level=info msg="StopContainer for \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\" with timeout 2 (s)" Nov 1 00:44:57.968642 env[1212]: time="2025-11-01T00:44:57.968601651Z" level=info msg="Stop container \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\" with signal terminated" Nov 1 00:44:57.975737 systemd-networkd[1035]: lxc_health: Link DOWN Nov 1 00:44:57.975747 systemd-networkd[1035]: lxc_health: Lost carrier Nov 1 00:44:57.977942 env[1212]: time="2025-11-01T00:44:57.977893155Z" level=info msg="shim disconnected" id=be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54 Nov 1 00:44:57.977942 env[1212]: time="2025-11-01T00:44:57.977938240Z" level=warning msg="cleaning up after shim disconnected" id=be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54 namespace=k8s.io Nov 1 00:44:57.978088 env[1212]: time="2025-11-01T00:44:57.977947387Z" level=info msg="cleaning up dead shim" Nov 1 00:44:57.984564 env[1212]: time="2025-11-01T00:44:57.984512131Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3653 runtime=io.containerd.runc.v2\n" Nov 1 00:44:57.987992 env[1212]: time="2025-11-01T00:44:57.987948101Z" level=info msg="StopContainer for \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\" returns successfully" Nov 1 00:44:57.988701 env[1212]: time="2025-11-01T00:44:57.988667059Z" level=info msg="StopPodSandbox for \"91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c\"" Nov 1 00:44:57.988780 env[1212]: time="2025-11-01T00:44:57.988733305Z" level=info msg="Container to stop \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:57.990539 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c-shm.mount: Deactivated successfully. Nov 1 00:44:58.017169 systemd[1]: cri-containerd-91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c.scope: Deactivated successfully. Nov 1 00:44:58.029524 systemd[1]: cri-containerd-868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be.scope: Deactivated successfully. Nov 1 00:44:58.029857 systemd[1]: cri-containerd-868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be.scope: Consumed 6.645s CPU time. Nov 1 00:44:58.045085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c-rootfs.mount: Deactivated successfully. Nov 1 00:44:58.054016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be-rootfs.mount: Deactivated successfully. 
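Here the log turns from steady state to teardown: containerd is asked to stop the cilium-operator container with a 30-second grace period and the cilium-agent with 2 seconds, lxc_health loses its link, and systemd notes the agent's scope had consumed 6.645 s of CPU before its rootfs mounts are cleaned up. A sketch for pulling the container id and grace period out of those StopContainer messages; the samples carry the inner message text with the journal's quote-escaping dropped.

```python
import re

# Extract container id and grace period from containerd StopContainer messages.
STOP = re.compile(r'StopContainer for "(?P<cid>[0-9a-f]+)" with timeout (?P<seconds>\d+) \(s\)')

samples = [  # inner message text from the entries above, quote-escaping removed
    'StopContainer for "be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54" with timeout 30 (s)',
    'StopContainer for "868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be" with timeout 2 (s)',
]

for line in samples:
    m = STOP.search(line)
    if m:
        print(m.group("cid")[:12], "grace period:", m.group("seconds"), "s")
```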
Nov 1 00:44:58.060882 env[1212]: time="2025-11-01T00:44:58.060824768Z" level=info msg="shim disconnected" id=91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c Nov 1 00:44:58.061037 env[1212]: time="2025-11-01T00:44:58.060887327Z" level=warning msg="cleaning up after shim disconnected" id=91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c namespace=k8s.io Nov 1 00:44:58.061037 env[1212]: time="2025-11-01T00:44:58.060899019Z" level=info msg="cleaning up dead shim" Nov 1 00:44:58.066562 env[1212]: time="2025-11-01T00:44:58.066514533Z" level=info msg="shim disconnected" id=868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be Nov 1 00:44:58.066562 env[1212]: time="2025-11-01T00:44:58.066560209Z" level=warning msg="cleaning up after shim disconnected" id=868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be namespace=k8s.io Nov 1 00:44:58.066800 env[1212]: time="2025-11-01T00:44:58.066569808Z" level=info msg="cleaning up dead shim" Nov 1 00:44:58.069071 env[1212]: time="2025-11-01T00:44:58.069004470Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3698 runtime=io.containerd.runc.v2\n" Nov 1 00:44:58.069491 env[1212]: time="2025-11-01T00:44:58.069454556Z" level=info msg="TearDown network for sandbox \"91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c\" successfully" Nov 1 00:44:58.069556 env[1212]: time="2025-11-01T00:44:58.069487990Z" level=info msg="StopPodSandbox for \"91dadb33e64288dc6a9b2986acbf09bc13c28e09f5de970d6e5f70ea1a0eb19c\" returns successfully" Nov 1 00:44:58.082255 env[1212]: time="2025-11-01T00:44:58.082191619Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3710 runtime=io.containerd.runc.v2\n" Nov 1 00:44:58.086274 env[1212]: time="2025-11-01T00:44:58.086192281Z" level=info msg="StopContainer for \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\" returns successfully" Nov 1 00:44:58.086941 env[1212]: time="2025-11-01T00:44:58.086902612Z" level=info msg="StopPodSandbox for \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\"" Nov 1 00:44:58.087148 env[1212]: time="2025-11-01T00:44:58.086969179Z" level=info msg="Container to stop \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:58.087148 env[1212]: time="2025-11-01T00:44:58.086987964Z" level=info msg="Container to stop \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:58.087148 env[1212]: time="2025-11-01T00:44:58.087001119Z" level=info msg="Container to stop \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:58.087148 env[1212]: time="2025-11-01T00:44:58.087015446Z" level=info msg="Container to stop \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:58.087148 env[1212]: time="2025-11-01T00:44:58.087045574Z" level=info msg="Container to stop \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:58.093091 systemd[1]: 
cri-containerd-751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248.scope: Deactivated successfully. Nov 1 00:44:58.117640 env[1212]: time="2025-11-01T00:44:58.117543877Z" level=info msg="shim disconnected" id=751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248 Nov 1 00:44:58.117640 env[1212]: time="2025-11-01T00:44:58.117618329Z" level=warning msg="cleaning up after shim disconnected" id=751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248 namespace=k8s.io Nov 1 00:44:58.117640 env[1212]: time="2025-11-01T00:44:58.117644338Z" level=info msg="cleaning up dead shim" Nov 1 00:44:58.125438 env[1212]: time="2025-11-01T00:44:58.125373193Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3741 runtime=io.containerd.runc.v2\n" Nov 1 00:44:58.125883 env[1212]: time="2025-11-01T00:44:58.125840822Z" level=info msg="TearDown network for sandbox \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" successfully" Nov 1 00:44:58.125938 env[1212]: time="2025-11-01T00:44:58.125882482Z" level=info msg="StopPodSandbox for \"751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248\" returns successfully" Nov 1 00:44:58.275580 kubelet[1934]: I1101 00:44:58.274363 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-xtables-lock\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.275580 kubelet[1934]: I1101 00:44:58.274426 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-hubble-tls\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.275580 kubelet[1934]: I1101 00:44:58.274445 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d126056-436b-42bf-96ea-1cc20d0761cd-cilium-config-path\") pod \"6d126056-436b-42bf-96ea-1cc20d0761cd\" (UID: \"6d126056-436b-42bf-96ea-1cc20d0761cd\") " Nov 1 00:44:58.275580 kubelet[1934]: I1101 00:44:58.274471 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cni-path\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.275580 kubelet[1934]: I1101 00:44:58.274485 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-etc-cni-netd\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.275580 kubelet[1934]: I1101 00:44:58.274498 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-lib-modules\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276364 kubelet[1934]: I1101 00:44:58.274512 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-run\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276364 kubelet[1934]: I1101 00:44:58.274532 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccf5477c-955c-4f12-a373-2f4712c8e970-clustermesh-secrets\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276364 kubelet[1934]: I1101 00:44:58.274550 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-net\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276364 kubelet[1934]: I1101 00:44:58.274526 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.276364 kubelet[1934]: I1101 00:44:58.274567 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-hostproc\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276364 kubelet[1934]: I1101 00:44:58.274586 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-kernel\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276560 kubelet[1934]: I1101 00:44:58.274568 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cni-path" (OuterVolumeSpecName: "cni-path") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.276560 kubelet[1934]: I1101 00:44:58.274603 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxq79\" (UniqueName: \"kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-kube-api-access-mxq79\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276560 kubelet[1934]: I1101 00:44:58.274721 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-config-path\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276560 kubelet[1934]: I1101 00:44:58.274753 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmhp2\" (UniqueName: \"kubernetes.io/projected/6d126056-436b-42bf-96ea-1cc20d0761cd-kube-api-access-rmhp2\") pod \"6d126056-436b-42bf-96ea-1cc20d0761cd\" (UID: \"6d126056-436b-42bf-96ea-1cc20d0761cd\") " Nov 1 00:44:58.276560 kubelet[1934]: I1101 00:44:58.274779 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-bpf-maps\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276560 kubelet[1934]: I1101 00:44:58.274802 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-cgroup\") pod \"ccf5477c-955c-4f12-a373-2f4712c8e970\" (UID: \"ccf5477c-955c-4f12-a373-2f4712c8e970\") " Nov 1 00:44:58.276772 kubelet[1934]: I1101 00:44:58.274872 1934 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.276772 kubelet[1934]: I1101 00:44:58.274888 1934 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.276772 kubelet[1934]: I1101 00:44:58.274918 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.277625 kubelet[1934]: I1101 00:44:58.277570 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d126056-436b-42bf-96ea-1cc20d0761cd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d126056-436b-42bf-96ea-1cc20d0761cd" (UID: "6d126056-436b-42bf-96ea-1cc20d0761cd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:58.278442 kubelet[1934]: I1101 00:44:58.278362 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.278442 kubelet[1934]: I1101 00:44:58.278420 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.278442 kubelet[1934]: I1101 00:44:58.278435 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.278719 kubelet[1934]: I1101 00:44:58.278454 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-hostproc" (OuterVolumeSpecName: "hostproc") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.278719 kubelet[1934]: I1101 00:44:58.278467 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.278719 kubelet[1934]: I1101 00:44:58.278481 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.279049 kubelet[1934]: I1101 00:44:58.278992 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-kube-api-access-mxq79" (OuterVolumeSpecName: "kube-api-access-mxq79") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "kube-api-access-mxq79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:58.279113 kubelet[1934]: I1101 00:44:58.279078 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:58.280259 kubelet[1934]: I1101 00:44:58.280230 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf5477c-955c-4f12-a373-2f4712c8e970-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:58.280329 kubelet[1934]: I1101 00:44:58.280258 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:58.281434 kubelet[1934]: I1101 00:44:58.281384 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d126056-436b-42bf-96ea-1cc20d0761cd-kube-api-access-rmhp2" (OuterVolumeSpecName: "kube-api-access-rmhp2") pod "6d126056-436b-42bf-96ea-1cc20d0761cd" (UID: "6d126056-436b-42bf-96ea-1cc20d0761cd"). InnerVolumeSpecName "kube-api-access-rmhp2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:58.282370 kubelet[1934]: I1101 00:44:58.282315 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ccf5477c-955c-4f12-a373-2f4712c8e970" (UID: "ccf5477c-955c-4f12-a373-2f4712c8e970"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:58.286937 systemd[1]: Removed slice kubepods-besteffort-pod6d126056_436b_42bf_96ea_1cc20d0761cd.slice. 
Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375279 1934 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375322 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375334 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxq79\" (UniqueName: \"kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-kube-api-access-mxq79\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375342 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375350 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmhp2\" (UniqueName: \"kubernetes.io/projected/6d126056-436b-42bf-96ea-1cc20d0761cd-kube-api-access-rmhp2\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375357 1934 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375364 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375348 kubelet[1934]: I1101 00:44:58.375371 1934 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccf5477c-955c-4f12-a373-2f4712c8e970-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375826 kubelet[1934]: I1101 00:44:58.375380 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d126056-436b-42bf-96ea-1cc20d0761cd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375826 kubelet[1934]: I1101 00:44:58.375387 1934 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375826 kubelet[1934]: I1101 00:44:58.375394 1934 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375826 kubelet[1934]: I1101 00:44:58.375401 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375826 kubelet[1934]: I1101 00:44:58.375408 1934 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccf5477c-955c-4f12-a373-2f4712c8e970-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.375826 
kubelet[1934]: I1101 00:44:58.375415 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccf5477c-955c-4f12-a373-2f4712c8e970-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:58.535893 kubelet[1934]: I1101 00:44:58.535769 1934 scope.go:117] "RemoveContainer" containerID="868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be" Nov 1 00:44:58.537108 env[1212]: time="2025-11-01T00:44:58.537068289Z" level=info msg="RemoveContainer for \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\"" Nov 1 00:44:58.541731 systemd[1]: Removed slice kubepods-burstable-podccf5477c_955c_4f12_a373_2f4712c8e970.slice. Nov 1 00:44:58.541834 systemd[1]: kubepods-burstable-podccf5477c_955c_4f12_a373_2f4712c8e970.slice: Consumed 6.759s CPU time. Nov 1 00:44:58.547399 env[1212]: time="2025-11-01T00:44:58.547348578Z" level=info msg="RemoveContainer for \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\" returns successfully" Nov 1 00:44:58.547709 kubelet[1934]: I1101 00:44:58.547678 1934 scope.go:117] "RemoveContainer" containerID="e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0" Nov 1 00:44:58.548829 env[1212]: time="2025-11-01T00:44:58.548799048Z" level=info msg="RemoveContainer for \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\"" Nov 1 00:44:58.552849 env[1212]: time="2025-11-01T00:44:58.552808396Z" level=info msg="RemoveContainer for \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\" returns successfully" Nov 1 00:44:58.553062 kubelet[1934]: I1101 00:44:58.553001 1934 scope.go:117] "RemoveContainer" containerID="ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6" Nov 1 00:44:58.554242 env[1212]: time="2025-11-01T00:44:58.554186858Z" level=info msg="RemoveContainer for \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\"" Nov 1 00:44:58.558738 env[1212]: time="2025-11-01T00:44:58.558625723Z" level=info msg="RemoveContainer for \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\" returns successfully" Nov 1 00:44:58.559094 kubelet[1934]: I1101 00:44:58.558911 1934 scope.go:117] "RemoveContainer" containerID="d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5" Nov 1 00:44:58.560240 env[1212]: time="2025-11-01T00:44:58.560205468Z" level=info msg="RemoveContainer for \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\"" Nov 1 00:44:58.564252 env[1212]: time="2025-11-01T00:44:58.564195089Z" level=info msg="RemoveContainer for \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\" returns successfully" Nov 1 00:44:58.564472 kubelet[1934]: I1101 00:44:58.564435 1934 scope.go:117] "RemoveContainer" containerID="5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956" Nov 1 00:44:58.565538 env[1212]: time="2025-11-01T00:44:58.565514418Z" level=info msg="RemoveContainer for \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\"" Nov 1 00:44:58.569702 env[1212]: time="2025-11-01T00:44:58.569172629Z" level=info msg="RemoveContainer for \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\" returns successfully" Nov 1 00:44:58.569873 kubelet[1934]: I1101 00:44:58.569760 1934 scope.go:117] "RemoveContainer" containerID="868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be" Nov 1 00:44:58.570276 env[1212]: time="2025-11-01T00:44:58.570124239Z" level=error msg="ContainerStatus for 
\"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\": not found" Nov 1 00:44:58.570474 kubelet[1934]: E1101 00:44:58.570443 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\": not found" containerID="868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be" Nov 1 00:44:58.570872 kubelet[1934]: I1101 00:44:58.570807 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be"} err="failed to get container status \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\": rpc error: code = NotFound desc = an error occurred when try to find container \"868b2f3c769d58b1f6b7f68f8bc89b745ff1ad6a790698b43eac52c1d81152be\": not found" Nov 1 00:44:58.570872 kubelet[1934]: I1101 00:44:58.570868 1934 scope.go:117] "RemoveContainer" containerID="e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0" Nov 1 00:44:58.571105 env[1212]: time="2025-11-01T00:44:58.571053687Z" level=error msg="ContainerStatus for \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\": not found" Nov 1 00:44:58.573665 kubelet[1934]: E1101 00:44:58.573605 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\": not found" containerID="e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0" Nov 1 00:44:58.573665 kubelet[1934]: I1101 00:44:58.573647 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0"} err="failed to get container status \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3eecafc981be478a751ee979bd12ac3693eb6d8f423a98c3f53fcd1fc2b44d0\": not found" Nov 1 00:44:58.573665 kubelet[1934]: I1101 00:44:58.573665 1934 scope.go:117] "RemoveContainer" containerID="ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6" Nov 1 00:44:58.574175 env[1212]: time="2025-11-01T00:44:58.574107998Z" level=error msg="ContainerStatus for \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\": not found" Nov 1 00:44:58.574544 kubelet[1934]: E1101 00:44:58.574517 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\": not found" containerID="ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6" Nov 1 00:44:58.574602 kubelet[1934]: I1101 00:44:58.574551 1934 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6"} err="failed to get container status \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee13319e5220e59ddbbcf66c558a5a3e55253a21edecfb57f192f3747a1a45f6\": not found" Nov 1 00:44:58.574602 kubelet[1934]: I1101 00:44:58.574567 1934 scope.go:117] "RemoveContainer" containerID="d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5" Nov 1 00:44:58.574805 env[1212]: time="2025-11-01T00:44:58.574750831Z" level=error msg="ContainerStatus for \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\": not found" Nov 1 00:44:58.574910 kubelet[1934]: E1101 00:44:58.574888 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\": not found" containerID="d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5" Nov 1 00:44:58.574968 kubelet[1934]: I1101 00:44:58.574915 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5"} err="failed to get container status \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d73ea7e0d04e64442e5358f0648369d65a2b10252511ad0addfecdb7c69feff5\": not found" Nov 1 00:44:58.574968 kubelet[1934]: I1101 00:44:58.574933 1934 scope.go:117] "RemoveContainer" containerID="5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956" Nov 1 00:44:58.575270 env[1212]: time="2025-11-01T00:44:58.575208652Z" level=error msg="ContainerStatus for \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\": not found" Nov 1 00:44:58.575353 kubelet[1934]: E1101 00:44:58.575331 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\": not found" containerID="5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956" Nov 1 00:44:58.575382 kubelet[1934]: I1101 00:44:58.575358 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956"} err="failed to get container status \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ba2c4cba99b862da509ac9749ab924a552a01e33c5a178c60dd4f2cd0cf6956\": not found" Nov 1 00:44:58.575382 kubelet[1934]: I1101 00:44:58.575373 1934 scope.go:117] "RemoveContainer" containerID="be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54" Nov 1 00:44:58.576711 env[1212]: time="2025-11-01T00:44:58.576456125Z" level=info msg="RemoveContainer for \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\"" Nov 1 00:44:58.579336 env[1212]: 
time="2025-11-01T00:44:58.579309675Z" level=info msg="RemoveContainer for \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\" returns successfully" Nov 1 00:44:58.579485 kubelet[1934]: I1101 00:44:58.579457 1934 scope.go:117] "RemoveContainer" containerID="be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54" Nov 1 00:44:58.579688 env[1212]: time="2025-11-01T00:44:58.579642949Z" level=error msg="ContainerStatus for \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\": not found" Nov 1 00:44:58.579781 kubelet[1934]: E1101 00:44:58.579754 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\": not found" containerID="be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54" Nov 1 00:44:58.579846 kubelet[1934]: I1101 00:44:58.579784 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54"} err="failed to get container status \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\": rpc error: code = NotFound desc = an error occurred when try to find container \"be60f2fc6e35cbb8655b42d804f2ea531e217ce17c94a8489c85c0d8102adf54\": not found" Nov 1 00:44:58.937332 systemd[1]: var-lib-kubelet-pods-6d126056\x2d436b\x2d42bf\x2d96ea\x2d1cc20d0761cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmhp2.mount: Deactivated successfully. Nov 1 00:44:58.937455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248-rootfs.mount: Deactivated successfully. Nov 1 00:44:58.937530 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-751f0729a1b177c18c486bb5d178fc8720d86e99a4c04194a47ca6a191c59248-shm.mount: Deactivated successfully. Nov 1 00:44:58.937604 systemd[1]: var-lib-kubelet-pods-ccf5477c\x2d955c\x2d4f12\x2da373\x2d2f4712c8e970-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxq79.mount: Deactivated successfully. Nov 1 00:44:58.937699 systemd[1]: var-lib-kubelet-pods-ccf5477c\x2d955c\x2d4f12\x2da373\x2d2f4712c8e970-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:44:58.937802 systemd[1]: var-lib-kubelet-pods-ccf5477c\x2d955c\x2d4f12\x2da373\x2d2f4712c8e970-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:44:59.874855 sshd[3597]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:59.878152 systemd[1]: Started sshd@26-10.0.0.111:22-10.0.0.1:59940.service. Nov 1 00:44:59.878671 systemd[1]: sshd@25-10.0.0.111:22-10.0.0.1:59926.service: Deactivated successfully. Nov 1 00:44:59.879234 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:44:59.880138 systemd-logind[1194]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:44:59.881306 systemd-logind[1194]: Removed session 26. 
Nov 1 00:44:59.922555 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 59940 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:59.923593 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:59.927254 systemd-logind[1194]: New session 27 of user core. Nov 1 00:44:59.928274 systemd[1]: Started session-27.scope. Nov 1 00:45:00.279505 kubelet[1934]: I1101 00:45:00.279393 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d126056-436b-42bf-96ea-1cc20d0761cd" path="/var/lib/kubelet/pods/6d126056-436b-42bf-96ea-1cc20d0761cd/volumes" Nov 1 00:45:00.279893 kubelet[1934]: I1101 00:45:00.279774 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf5477c-955c-4f12-a373-2f4712c8e970" path="/var/lib/kubelet/pods/ccf5477c-955c-4f12-a373-2f4712c8e970/volumes" Nov 1 00:45:00.338284 kubelet[1934]: E1101 00:45:00.338219 1934 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:45:00.505313 sshd[3758]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:00.507752 systemd[1]: sshd@26-10.0.0.111:22-10.0.0.1:59940.service: Deactivated successfully. Nov 1 00:45:00.508285 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:45:00.509832 systemd[1]: Started sshd@27-10.0.0.111:22-10.0.0.1:43786.service. Nov 1 00:45:00.511008 systemd-logind[1194]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:45:00.511914 systemd-logind[1194]: Removed session 27. Nov 1 00:45:00.536914 systemd[1]: Created slice kubepods-burstable-pod1a7e07eb_246c_4571_aeb7_8574c6bf1a48.slice. Nov 1 00:45:00.558886 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 43786 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:00.559097 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:00.564079 systemd[1]: Started session-28.scope. Nov 1 00:45:00.564688 systemd-logind[1194]: New session 28 of user core. 
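The kubelet.go:3011 message above ("Container runtime network not ready ... cni plugin not initialized") is what later flips the node's Ready condition to False (the setters.go "Node became not ready" entry further down): with the Cilium agent pod gone, no CNI plugin is registered, so the runtime reports the pod network as not ready. A hedged way to watch this from outside the node is to read the node conditions and the Cilium pods through the API; the sketch below uses the official Kubernetes Python client and Cilium's usual k8s-app=cilium label, neither of which is taken from this log.

# Sketch: check node readiness and Cilium agent pods while the CNI is down.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    for cond in node.status.conditions or []:
        if cond.type == "Ready":
            # Expect status=False, reason=KubeletNotReady, message mentioning the CNI plugin
            print(node.metadata.name, cond.status, cond.reason, "-", cond.message)

for pod in v1.list_namespaced_pod("kube-system", label_selector="k8s-app=cilium").items:
    print(pod.metadata.name, pod.status.phase)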
Nov 1 00:45:00.585564 kubelet[1934]: I1101 00:45:00.585508 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-config-path\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585564 kubelet[1934]: I1101 00:45:00.585542 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-net\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585564 kubelet[1934]: I1101 00:45:00.585560 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hubble-tls\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585564 kubelet[1934]: I1101 00:45:00.585573 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hostproc\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585895 kubelet[1934]: I1101 00:45:00.585588 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-cgroup\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585895 kubelet[1934]: I1101 00:45:00.585600 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cni-path\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585895 kubelet[1934]: I1101 00:45:00.585619 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-run\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585895 kubelet[1934]: I1101 00:45:00.585645 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-lib-modules\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585895 kubelet[1934]: I1101 00:45:00.585659 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-ipsec-secrets\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.585895 kubelet[1934]: I1101 00:45:00.585672 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-kernel\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.586060 kubelet[1934]: I1101 00:45:00.585684 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-xtables-lock\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.586060 kubelet[1934]: I1101 00:45:00.585696 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-clustermesh-secrets\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.586060 kubelet[1934]: I1101 00:45:00.585708 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-etc-cni-netd\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.586060 kubelet[1934]: I1101 00:45:00.585719 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-bpf-maps\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.586060 kubelet[1934]: I1101 00:45:00.585730 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5jm\" (UniqueName: \"kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-kube-api-access-ml5jm\") pod \"cilium-x5dtf\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " pod="kube-system/cilium-x5dtf" Nov 1 00:45:00.723227 sshd[3771]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:00.726802 systemd[1]: sshd@27-10.0.0.111:22-10.0.0.1:43786.service: Deactivated successfully. Nov 1 00:45:00.727462 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 00:45:00.729206 systemd[1]: Started sshd@28-10.0.0.111:22-10.0.0.1:43800.service. Nov 1 00:45:00.729779 systemd-logind[1194]: Session 28 logged out. Waiting for processes to exit. Nov 1 00:45:00.730917 systemd-logind[1194]: Removed session 28. Nov 1 00:45:00.747154 kubelet[1934]: E1101 00:45:00.747106 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:00.747790 env[1212]: time="2025-11-01T00:45:00.747735115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5dtf,Uid:1a7e07eb-246c-4571-aeb7-8574c6bf1a48,Namespace:kube-system,Attempt:0,}" Nov 1 00:45:00.769496 env[1212]: time="2025-11-01T00:45:00.769285828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:00.769496 env[1212]: time="2025-11-01T00:45:00.769332026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:00.769496 env[1212]: time="2025-11-01T00:45:00.769356713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:00.769751 env[1212]: time="2025-11-01T00:45:00.769687202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d pid=3797 runtime=io.containerd.runc.v2 Nov 1 00:45:00.775270 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 43800 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:00.776765 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:00.783998 systemd[1]: Started session-29.scope. Nov 1 00:45:00.784497 systemd-logind[1194]: New session 29 of user core. Nov 1 00:45:00.789172 systemd[1]: Started cri-containerd-694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d.scope. Nov 1 00:45:00.818405 env[1212]: time="2025-11-01T00:45:00.818349332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5dtf,Uid:1a7e07eb-246c-4571-aeb7-8574c6bf1a48,Namespace:kube-system,Attempt:0,} returns sandbox id \"694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d\"" Nov 1 00:45:00.819378 kubelet[1934]: E1101 00:45:00.819353 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:00.825015 env[1212]: time="2025-11-01T00:45:00.824965231Z" level=info msg="CreateContainer within sandbox \"694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:45:00.836094 env[1212]: time="2025-11-01T00:45:00.836038283Z" level=info msg="CreateContainer within sandbox \"694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\"" Nov 1 00:45:00.836815 env[1212]: time="2025-11-01T00:45:00.836779423Z" level=info msg="StartContainer for \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\"" Nov 1 00:45:00.854994 systemd[1]: Started cri-containerd-c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b.scope. Nov 1 00:45:00.865281 systemd[1]: cri-containerd-c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b.scope: Deactivated successfully. Nov 1 00:45:00.865652 systemd[1]: Stopped cri-containerd-c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b.scope. 
Nov 1 00:45:00.883247 env[1212]: time="2025-11-01T00:45:00.883185734Z" level=info msg="shim disconnected" id=c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b Nov 1 00:45:00.883247 env[1212]: time="2025-11-01T00:45:00.883249456Z" level=warning msg="cleaning up after shim disconnected" id=c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b namespace=k8s.io Nov 1 00:45:00.883524 env[1212]: time="2025-11-01T00:45:00.883266157Z" level=info msg="cleaning up dead shim" Nov 1 00:45:00.892375 env[1212]: time="2025-11-01T00:45:00.892261590Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3864 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:45:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:45:00.892695 env[1212]: time="2025-11-01T00:45:00.892557783Z" level=error msg="copy shim log" error="read /proc/self/fd/33: file already closed" Nov 1 00:45:00.894257 env[1212]: time="2025-11-01T00:45:00.894215045Z" level=error msg="Failed to pipe stdout of container \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\"" error="reading from a closed fifo" Nov 1 00:45:00.894477 env[1212]: time="2025-11-01T00:45:00.894398633Z" level=error msg="Failed to pipe stderr of container \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\"" error="reading from a closed fifo" Nov 1 00:45:00.897050 env[1212]: time="2025-11-01T00:45:00.896994499Z" level=error msg="StartContainer for \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:45:00.897323 kubelet[1934]: E1101 00:45:00.897264 1934 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b" Nov 1 00:45:00.897427 kubelet[1934]: E1101 00:45:00.897407 1934 kuberuntime_manager.go:1449] "Unhandled Error" err="init container mount-cgroup start failed in pod cilium-x5dtf_kube-system(1a7e07eb-246c-4571-aeb7-8574c6bf1a48): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" logger="UnhandledError" Nov 1 00:45:00.897481 kubelet[1934]: E1101 00:45:00.897460 1934 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x5dtf" podUID="1a7e07eb-246c-4571-aeb7-8574c6bf1a48" Nov 1 00:45:01.557839 
env[1212]: time="2025-11-01T00:45:01.557790282Z" level=info msg="StopPodSandbox for \"694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d\"" Nov 1 00:45:01.558151 env[1212]: time="2025-11-01T00:45:01.558106323Z" level=info msg="Container to stop \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:45:01.563646 systemd[1]: cri-containerd-694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d.scope: Deactivated successfully. Nov 1 00:45:01.594114 env[1212]: time="2025-11-01T00:45:01.594053478Z" level=info msg="shim disconnected" id=694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d Nov 1 00:45:01.594319 env[1212]: time="2025-11-01T00:45:01.594122519Z" level=warning msg="cleaning up after shim disconnected" id=694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d namespace=k8s.io Nov 1 00:45:01.594319 env[1212]: time="2025-11-01T00:45:01.594137047Z" level=info msg="cleaning up dead shim" Nov 1 00:45:01.601117 env[1212]: time="2025-11-01T00:45:01.601062109Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3893 runtime=io.containerd.runc.v2\n" Nov 1 00:45:01.601410 env[1212]: time="2025-11-01T00:45:01.601373071Z" level=info msg="TearDown network for sandbox \"694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d\" successfully" Nov 1 00:45:01.601410 env[1212]: time="2025-11-01T00:45:01.601400603Z" level=info msg="StopPodSandbox for \"694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d\" returns successfully" Nov 1 00:45:01.690972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d-rootfs.mount: Deactivated successfully. Nov 1 00:45:01.691092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-694983bd9a62b308a13862d139b634236f8020b2946ef987cdc50f65bf83520d-shm.mount: Deactivated successfully. 
Nov 1 00:45:01.693019 kubelet[1934]: I1101 00:45:01.692974 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cni-path\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693019 kubelet[1934]: I1101 00:45:01.693041 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5jm\" (UniqueName: \"kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-kube-api-access-ml5jm\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693445 kubelet[1934]: I1101 00:45:01.693068 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-run\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693445 kubelet[1934]: I1101 00:45:01.693091 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-net\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693445 kubelet[1934]: I1101 00:45:01.693111 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-etc-cni-netd\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693445 kubelet[1934]: I1101 00:45:01.693139 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-clustermesh-secrets\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693445 kubelet[1934]: I1101 00:45:01.693158 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hostproc\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693445 kubelet[1934]: I1101 00:45:01.693177 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-lib-modules\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693610 kubelet[1934]: I1101 00:45:01.693199 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-ipsec-secrets\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693610 kubelet[1934]: I1101 00:45:01.693222 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-bpf-maps\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693610 kubelet[1934]: I1101 00:45:01.693293 1934 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-config-path\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693610 kubelet[1934]: I1101 00:45:01.693362 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-cgroup\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693610 kubelet[1934]: I1101 00:45:01.693367 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.693610 kubelet[1934]: I1101 00:45:01.693395 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-xtables-lock\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693775 kubelet[1934]: I1101 00:45:01.693401 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.693775 kubelet[1934]: I1101 00:45:01.693426 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.693775 kubelet[1934]: I1101 00:45:01.693440 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.693775 kubelet[1934]: I1101 00:45:01.693451 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.693775 kubelet[1934]: I1101 00:45:01.693459 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hubble-tls\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693903 kubelet[1934]: I1101 00:45:01.693502 1934 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-kernel\") pod \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\" (UID: \"1a7e07eb-246c-4571-aeb7-8574c6bf1a48\") " Nov 1 00:45:01.693903 kubelet[1934]: I1101 00:45:01.693538 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.693903 kubelet[1934]: I1101 00:45:01.693550 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.693903 kubelet[1934]: I1101 00:45:01.693562 1934 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.693903 kubelet[1934]: I1101 00:45:01.693572 1934 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.693903 kubelet[1934]: I1101 00:45:01.693581 1934 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.693903 kubelet[1934]: I1101 00:45:01.693607 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.694086 kubelet[1934]: I1101 00:45:01.693642 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.694086 kubelet[1934]: I1101 00:45:01.693868 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.694086 kubelet[1934]: I1101 00:45:01.693892 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.694769 kubelet[1934]: I1101 00:45:01.694248 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:45:01.695797 kubelet[1934]: I1101 00:45:01.695763 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:45:01.698431 systemd[1]: var-lib-kubelet-pods-1a7e07eb\x2d246c\x2d4571\x2daeb7\x2d8574c6bf1a48-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:45:01.698618 kubelet[1934]: I1101 00:45:01.698467 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:45:01.700325 systemd[1]: var-lib-kubelet-pods-1a7e07eb\x2d246c\x2d4571\x2daeb7\x2d8574c6bf1a48-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 00:45:01.701085 kubelet[1934]: I1101 00:45:01.700993 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-kube-api-access-ml5jm" (OuterVolumeSpecName: "kube-api-access-ml5jm") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "kube-api-access-ml5jm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:45:01.701615 kubelet[1934]: I1101 00:45:01.701586 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:45:01.702128 systemd[1]: var-lib-kubelet-pods-1a7e07eb\x2d246c\x2d4571\x2daeb7\x2d8574c6bf1a48-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dml5jm.mount: Deactivated successfully. Nov 1 00:45:01.702201 systemd[1]: var-lib-kubelet-pods-1a7e07eb\x2d246c\x2d4571\x2daeb7\x2d8574c6bf1a48-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 1 00:45:01.703109 kubelet[1934]: I1101 00:45:01.703084 1934 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a7e07eb-246c-4571-aeb7-8574c6bf1a48" (UID: "1a7e07eb-246c-4571-aeb7-8574c6bf1a48"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.793921 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.793956 1934 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.793964 1934 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.793972 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.793980 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ml5jm\" (UniqueName: \"kubernetes.io/projected/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-kube-api-access-ml5jm\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.793987 1934 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.793994 1934 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.793971 kubelet[1934]: I1101 00:45:01.794000 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.794360 kubelet[1934]: I1101 00:45:01.794008 1934 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:01.794360 kubelet[1934]: I1101 00:45:01.794015 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7e07eb-246c-4571-aeb7-8574c6bf1a48-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:45:02.278192 kubelet[1934]: E1101 00:45:02.278121 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:02.284109 systemd[1]: Removed slice kubepods-burstable-pod1a7e07eb_246c_4571_aeb7_8574c6bf1a48.slice. 
Nov 1 00:45:02.474996 kubelet[1934]: I1101 00:45:02.474929 1934 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:45:02Z","lastTransitionTime":"2025-11-01T00:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:45:02.560641 kubelet[1934]: I1101 00:45:02.560594 1934 scope.go:117] "RemoveContainer" containerID="c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b" Nov 1 00:45:02.561730 env[1212]: time="2025-11-01T00:45:02.561687689Z" level=info msg="RemoveContainer for \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\"" Nov 1 00:45:02.565081 env[1212]: time="2025-11-01T00:45:02.565042513Z" level=info msg="RemoveContainer for \"c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b\" returns successfully" Nov 1 00:45:02.614459 systemd[1]: Created slice kubepods-burstable-pod7bb9a0ba_10b9_4f36_98a2_0164d14864f6.slice. Nov 1 00:45:02.699180 kubelet[1934]: I1101 00:45:02.699121 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-cilium-cgroup\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699180 kubelet[1934]: I1101 00:45:02.699164 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-hostproc\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699180 kubelet[1934]: I1101 00:45:02.699181 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-clustermesh-secrets\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699670 kubelet[1934]: I1101 00:45:02.699269 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-etc-cni-netd\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699670 kubelet[1934]: I1101 00:45:02.699315 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-xtables-lock\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699670 kubelet[1934]: I1101 00:45:02.699343 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-cilium-config-path\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699670 kubelet[1934]: I1101 00:45:02.699377 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-host-proc-sys-net\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699670 kubelet[1934]: I1101 00:45:02.699414 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-cilium-ipsec-secrets\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699861 kubelet[1934]: I1101 00:45:02.699457 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-host-proc-sys-kernel\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699861 kubelet[1934]: I1101 00:45:02.699481 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-hubble-tls\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699861 kubelet[1934]: I1101 00:45:02.699565 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-cilium-run\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699861 kubelet[1934]: I1101 00:45:02.699598 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-cni-path\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699861 kubelet[1934]: I1101 00:45:02.699613 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-lib-modules\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.699861 kubelet[1934]: I1101 00:45:02.699677 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-bpf-maps\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.700078 kubelet[1934]: I1101 00:45:02.699708 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68kmz\" (UniqueName: \"kubernetes.io/projected/7bb9a0ba-10b9-4f36-98a2-0164d14864f6-kube-api-access-68kmz\") pod \"cilium-dwscl\" (UID: \"7bb9a0ba-10b9-4f36-98a2-0164d14864f6\") " pod="kube-system/cilium-dwscl" Nov 1 00:45:02.921145 kubelet[1934]: E1101 00:45:02.921003 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:02.921764 env[1212]: time="2025-11-01T00:45:02.921703926Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-dwscl,Uid:7bb9a0ba-10b9-4f36-98a2-0164d14864f6,Namespace:kube-system,Attempt:0,}" Nov 1 00:45:02.938299 env[1212]: time="2025-11-01T00:45:02.938236952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:02.938299 env[1212]: time="2025-11-01T00:45:02.938272359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:02.938299 env[1212]: time="2025-11-01T00:45:02.938283340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:02.938609 env[1212]: time="2025-11-01T00:45:02.938575264Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227 pid=3921 runtime=io.containerd.runc.v2 Nov 1 00:45:02.948869 systemd[1]: Started cri-containerd-a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227.scope. Nov 1 00:45:02.971293 env[1212]: time="2025-11-01T00:45:02.971237998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dwscl,Uid:7bb9a0ba-10b9-4f36-98a2-0164d14864f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\"" Nov 1 00:45:02.972393 kubelet[1934]: E1101 00:45:02.972318 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:02.980657 env[1212]: time="2025-11-01T00:45:02.980598073Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:45:03.121123 env[1212]: time="2025-11-01T00:45:03.121037700Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025\"" Nov 1 00:45:03.121749 env[1212]: time="2025-11-01T00:45:03.121725998Z" level=info msg="StartContainer for \"3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025\"" Nov 1 00:45:03.138037 systemd[1]: Started cri-containerd-3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025.scope. Nov 1 00:45:03.163920 env[1212]: time="2025-11-01T00:45:03.163858822Z" level=info msg="StartContainer for \"3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025\" returns successfully" Nov 1 00:45:03.174852 systemd[1]: cri-containerd-3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025.scope: Deactivated successfully. 
Nov 1 00:45:03.203981 env[1212]: time="2025-11-01T00:45:03.203925022Z" level=info msg="shim disconnected" id=3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025 Nov 1 00:45:03.203981 env[1212]: time="2025-11-01T00:45:03.203982911Z" level=warning msg="cleaning up after shim disconnected" id=3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025 namespace=k8s.io Nov 1 00:45:03.204219 env[1212]: time="2025-11-01T00:45:03.203994934Z" level=info msg="cleaning up dead shim" Nov 1 00:45:03.210518 env[1212]: time="2025-11-01T00:45:03.210486235Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4007 runtime=io.containerd.runc.v2\n" Nov 1 00:45:03.564919 kubelet[1934]: E1101 00:45:03.564876 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:03.571330 env[1212]: time="2025-11-01T00:45:03.571282024Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:45:03.586146 env[1212]: time="2025-11-01T00:45:03.585992498Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9\"" Nov 1 00:45:03.586687 env[1212]: time="2025-11-01T00:45:03.586659004Z" level=info msg="StartContainer for \"83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9\"" Nov 1 00:45:03.601815 systemd[1]: Started cri-containerd-83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9.scope. Nov 1 00:45:03.627344 env[1212]: time="2025-11-01T00:45:03.627284536Z" level=info msg="StartContainer for \"83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9\" returns successfully" Nov 1 00:45:03.633858 systemd[1]: cri-containerd-83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9.scope: Deactivated successfully. 
Nov 1 00:45:03.656794 env[1212]: time="2025-11-01T00:45:03.656739637Z" level=info msg="shim disconnected" id=83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9 Nov 1 00:45:03.657108 env[1212]: time="2025-11-01T00:45:03.656794262Z" level=warning msg="cleaning up after shim disconnected" id=83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9 namespace=k8s.io Nov 1 00:45:03.657108 env[1212]: time="2025-11-01T00:45:03.656810502Z" level=info msg="cleaning up dead shim" Nov 1 00:45:03.666460 env[1212]: time="2025-11-01T00:45:03.666394077Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n" Nov 1 00:45:03.989168 kubelet[1934]: W1101 00:45:03.988980 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a7e07eb_246c_4571_aeb7_8574c6bf1a48.slice/cri-containerd-c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b.scope WatchSource:0}: container "c6c7a7ad9673031267d4e3a922fe0be3c793356cc2799eca8e02648d9c51eb1b" in namespace "k8s.io": not found Nov 1 00:45:04.277964 kubelet[1934]: E1101 00:45:04.277816 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:04.279943 kubelet[1934]: I1101 00:45:04.279898 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7e07eb-246c-4571-aeb7-8574c6bf1a48" path="/var/lib/kubelet/pods/1a7e07eb-246c-4571-aeb7-8574c6bf1a48/volumes" Nov 1 00:45:04.568603 kubelet[1934]: E1101 00:45:04.568573 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:04.572798 env[1212]: time="2025-11-01T00:45:04.572746739Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:45:04.592196 env[1212]: time="2025-11-01T00:45:04.592135810Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21\"" Nov 1 00:45:04.592799 env[1212]: time="2025-11-01T00:45:04.592762690Z" level=info msg="StartContainer for \"cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21\"" Nov 1 00:45:04.610595 systemd[1]: Started cri-containerd-cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21.scope. Nov 1 00:45:04.635512 env[1212]: time="2025-11-01T00:45:04.635454454Z" level=info msg="StartContainer for \"cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21\" returns successfully" Nov 1 00:45:04.638396 systemd[1]: cri-containerd-cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21.scope: Deactivated successfully. 
Nov 1 00:45:04.660111 env[1212]: time="2025-11-01T00:45:04.660035064Z" level=info msg="shim disconnected" id=cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21 Nov 1 00:45:04.660111 env[1212]: time="2025-11-01T00:45:04.660094737Z" level=warning msg="cleaning up after shim disconnected" id=cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21 namespace=k8s.io Nov 1 00:45:04.660111 env[1212]: time="2025-11-01T00:45:04.660103183Z" level=info msg="cleaning up dead shim" Nov 1 00:45:04.667182 env[1212]: time="2025-11-01T00:45:04.667135367Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n" Nov 1 00:45:04.806111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21-rootfs.mount: Deactivated successfully. Nov 1 00:45:05.339571 kubelet[1934]: E1101 00:45:05.339524 1934 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:45:05.572992 kubelet[1934]: E1101 00:45:05.572958 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:05.578003 env[1212]: time="2025-11-01T00:45:05.577925330Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:45:05.592479 env[1212]: time="2025-11-01T00:45:05.592357530Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee\"" Nov 1 00:45:05.594270 env[1212]: time="2025-11-01T00:45:05.594209315Z" level=info msg="StartContainer for \"875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee\"" Nov 1 00:45:05.613210 systemd[1]: Started cri-containerd-875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee.scope. Nov 1 00:45:05.638189 systemd[1]: cri-containerd-875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee.scope: Deactivated successfully. 
Nov 1 00:45:05.639471 env[1212]: time="2025-11-01T00:45:05.639414138Z" level=info msg="StartContainer for \"875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee\" returns successfully" Nov 1 00:45:05.663058 env[1212]: time="2025-11-01T00:45:05.662981010Z" level=info msg="shim disconnected" id=875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee Nov 1 00:45:05.663058 env[1212]: time="2025-11-01T00:45:05.663043920Z" level=warning msg="cleaning up after shim disconnected" id=875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee namespace=k8s.io Nov 1 00:45:05.663058 env[1212]: time="2025-11-01T00:45:05.663056203Z" level=info msg="cleaning up dead shim" Nov 1 00:45:05.669604 env[1212]: time="2025-11-01T00:45:05.669542877Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4181 runtime=io.containerd.runc.v2\n" Nov 1 00:45:05.805803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee-rootfs.mount: Deactivated successfully. Nov 1 00:45:06.577384 kubelet[1934]: E1101 00:45:06.577346 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:06.592145 env[1212]: time="2025-11-01T00:45:06.592092657Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:45:06.616153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146554535.mount: Deactivated successfully. Nov 1 00:45:06.619201 env[1212]: time="2025-11-01T00:45:06.619154692Z" level=info msg="CreateContainer within sandbox \"a418181279e12dac4614cdcce3561668cb1504e4a70b62c9140c09f4e0a78227\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d86554a226f24f9f277d6c2ce2fe458b0923e1669279cd162e5ae6044b48cbdc\"" Nov 1 00:45:06.619723 env[1212]: time="2025-11-01T00:45:06.619695428Z" level=info msg="StartContainer for \"d86554a226f24f9f277d6c2ce2fe458b0923e1669279cd162e5ae6044b48cbdc\"" Nov 1 00:45:06.639904 systemd[1]: Started cri-containerd-d86554a226f24f9f277d6c2ce2fe458b0923e1669279cd162e5ae6044b48cbdc.scope. Nov 1 00:45:06.672817 env[1212]: time="2025-11-01T00:45:06.672733705Z" level=info msg="StartContainer for \"d86554a226f24f9f277d6c2ce2fe458b0923e1669279cd162e5ae6044b48cbdc\" returns successfully" Nov 1 00:45:06.805712 systemd[1]: run-containerd-runc-k8s.io-d86554a226f24f9f277d6c2ce2fe458b0923e1669279cd162e5ae6044b48cbdc-runc.IQgn66.mount: Deactivated successfully. 
Nov 1 00:45:07.016069 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 00:45:07.099751 kubelet[1934]: W1101 00:45:07.099679 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb9a0ba_10b9_4f36_98a2_0164d14864f6.slice/cri-containerd-3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025.scope WatchSource:0}: task 3859e677509da3d9c0dc54f841ad94e127a5847ef7004687430f8982afb3d025 not found Nov 1 00:45:07.582383 kubelet[1934]: E1101 00:45:07.582334 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:07.595913 kubelet[1934]: I1101 00:45:07.595821 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dwscl" podStartSLOduration=5.595798858 podStartE2EDuration="5.595798858s" podCreationTimestamp="2025-11-01 00:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:45:07.595511854 +0000 UTC m=+97.427543147" watchObservedRunningTime="2025-11-01 00:45:07.595798858 +0000 UTC m=+97.427830151" Nov 1 00:45:08.917901 kubelet[1934]: E1101 00:45:08.917858 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:09.725682 systemd-networkd[1035]: lxc_health: Link UP Nov 1 00:45:09.734888 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:45:09.734206 systemd-networkd[1035]: lxc_health: Gained carrier Nov 1 00:45:10.208090 kubelet[1934]: W1101 00:45:10.208041 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb9a0ba_10b9_4f36_98a2_0164d14864f6.slice/cri-containerd-83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9.scope WatchSource:0}: task 83f8389a8f4818fb196f0b3b0a48f173090511635b37dc17f2304246071ad2d9 not found Nov 1 00:45:10.882206 systemd-networkd[1035]: lxc_health: Gained IPv6LL Nov 1 00:45:10.919253 kubelet[1934]: E1101 00:45:10.919213 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:11.591747 kubelet[1934]: E1101 00:45:11.591686 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:12.277831 kubelet[1934]: E1101 00:45:12.277788 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:12.594461 kubelet[1934]: E1101 00:45:12.594423 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:13.315720 kubelet[1934]: W1101 00:45:13.315641 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb9a0ba_10b9_4f36_98a2_0164d14864f6.slice/cri-containerd-cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21.scope WatchSource:0}: task 
cfd4273c48c236a9f521e36b400f34d6f6144a8678fb7bcd3e9df5c6d7475b21 not found Nov 1 00:45:14.277913 kubelet[1934]: E1101 00:45:14.277864 1934 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:15.549642 sshd[3788]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:15.552257 systemd[1]: sshd@28-10.0.0.111:22-10.0.0.1:43800.service: Deactivated successfully. Nov 1 00:45:15.552986 systemd[1]: session-29.scope: Deactivated successfully. Nov 1 00:45:15.553687 systemd-logind[1194]: Session 29 logged out. Waiting for processes to exit. Nov 1 00:45:15.554331 systemd-logind[1194]: Removed session 29. Nov 1 00:45:16.424059 kubelet[1934]: W1101 00:45:16.423948 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb9a0ba_10b9_4f36_98a2_0164d14864f6.slice/cri-containerd-875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee.scope WatchSource:0}: task 875bd55450f49ed7a0f42657cd21a7f057658080e4d22044a5ee90d037e8bcee not found