Nov 1 00:37:07.266281 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:37:07.266315 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:37:07.266326 kernel: BIOS-provided physical RAM map:
Nov 1 00:37:07.266339 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:37:07.266352 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:37:07.266362 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:37:07.266372 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 1 00:37:07.266398 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 1 00:37:07.266417 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:37:07.266425 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:37:07.266434 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:37:07.266442 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:37:07.266453 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:37:07.266461 kernel: NX (Execute Disable) protection: active
Nov 1 00:37:07.266474 kernel: SMBIOS 2.8 present.
Nov 1 00:37:07.266494 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 1 00:37:07.266511 kernel: Hypervisor detected: KVM
Nov 1 00:37:07.266521 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:37:07.266534 kernel: kvm-clock: cpu 0, msr 311a0001, primary cpu clock
Nov 1 00:37:07.266547 kernel: kvm-clock: using sched offset of 3487866174 cycles
Nov 1 00:37:07.266557 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:37:07.266565 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 00:37:07.266576 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:37:07.266587 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:37:07.266605 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 1 00:37:07.266615 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:37:07.266624 kernel: Using GB pages for direct mapping
Nov 1 00:37:07.266636 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:37:07.266645 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 1 00:37:07.266663 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:37:07.266676 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:37:07.266685 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:37:07.266697 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 1 00:37:07.266706 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:37:07.266715 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:37:07.266724 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:37:07.266733 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:37:07.266743 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 1 00:37:07.266752 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 1 00:37:07.266761 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 1 00:37:07.266777 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 1 00:37:07.266792 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 1 00:37:07.266806 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 1 00:37:07.266816 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 1 00:37:07.266826 kernel: No NUMA configuration found
Nov 1 00:37:07.266836 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 1 00:37:07.266848 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 1 00:37:07.266857 kernel: Zone ranges:
Nov 1 00:37:07.266867 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:37:07.266877 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 1 00:37:07.266887 kernel: Normal empty
Nov 1 00:37:07.266896 kernel: Movable zone start for each node
Nov 1 00:37:07.266906 kernel: Early memory node ranges
Nov 1 00:37:07.266916 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:37:07.266939 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 1 00:37:07.266974 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 1 00:37:07.266984 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:37:07.266994 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:37:07.267004 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 1 00:37:07.267014 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:37:07.267023 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:37:07.267033 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:37:07.267043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:37:07.267052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:37:07.267062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:37:07.267079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:37:07.267089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:37:07.267099 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:37:07.267108 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:37:07.267118 kernel: TSC deadline timer available
Nov 1 00:37:07.267128 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 1 00:37:07.267137 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:37:07.267146 kernel: kvm-guest: setup PV sched yield
Nov 1 00:37:07.267179 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:37:07.267196 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:37:07.267217 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:37:07.267228 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Nov 1 00:37:07.267240 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Nov 1 00:37:07.267251 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Nov 1 00:37:07.267261 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 00:37:07.267271 kernel: kvm-guest: setup async PF for cpu 0
Nov 1 00:37:07.267305 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Nov 1 00:37:07.267316 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:37:07.267334 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:37:07.267348 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 1 00:37:07.267358 kernel: Policy zone: DMA32
Nov 1 00:37:07.267369 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:37:07.267394 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:37:07.267415 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:37:07.267458 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:37:07.267489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:37:07.267512 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 134796K reserved, 0K cma-reserved)
Nov 1 00:37:07.267523 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:37:07.267533 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:37:07.267542 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:37:07.267552 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:37:07.267561 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:37:07.267595 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:37:07.267615 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:37:07.267625 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:37:07.267647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:37:07.267669 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:37:07.267679 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 00:37:07.267699 kernel: random: crng init done
Nov 1 00:37:07.267714 kernel: Console: colour VGA+ 80x25
Nov 1 00:37:07.267724 kernel: printk: console [ttyS0] enabled
Nov 1 00:37:07.267733 kernel: ACPI: Core revision 20210730
Nov 1 00:37:07.267743 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:37:07.267753 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:37:07.267781 kernel: x2apic enabled
Nov 1 00:37:07.267796 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:37:07.267810 kernel: kvm-guest: setup PV IPIs
Nov 1 00:37:07.267829 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:37:07.267839 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:37:07.267849 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 00:37:07.267859 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:37:07.267884 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:37:07.267895 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:37:07.267914 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:37:07.267931 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:37:07.267945 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:37:07.267958 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:37:07.267968 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:37:07.267978 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:37:07.267988 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:37:07.267999 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 00:37:07.268010 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:37:07.268022 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:37:07.268033 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:37:07.268059 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:37:07.268071 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:37:07.268081 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:37:07.268091 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:37:07.268102 kernel: LSM: Security Framework initializing
Nov 1 00:37:07.268114 kernel: SELinux: Initializing.
Nov 1 00:37:07.268125 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:37:07.268135 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:37:07.268146 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:37:07.268166 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:37:07.268184 kernel: ... version: 0
Nov 1 00:37:07.268195 kernel: ... bit width: 48
Nov 1 00:37:07.268219 kernel: ... generic registers: 6
Nov 1 00:37:07.268239 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:37:07.268263 kernel: ... max period: 00007fffffffffff
Nov 1 00:37:07.268277 kernel: ... fixed-purpose events: 0
Nov 1 00:37:07.268295 kernel: ... event mask: 000000000000003f
Nov 1 00:37:07.268310 kernel: signal: max sigframe size: 1776
Nov 1 00:37:07.268320 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:37:07.268331 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:37:07.268341 kernel: x86: Booting SMP configuration:
Nov 1 00:37:07.268351 kernel: .... node #0, CPUs: #1
Nov 1 00:37:07.268362 kernel: kvm-clock: cpu 1, msr 311a0041, secondary cpu clock
Nov 1 00:37:07.268403 kernel: kvm-guest: setup async PF for cpu 1
Nov 1 00:37:07.268418 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Nov 1 00:37:07.268427 kernel: #2
Nov 1 00:37:07.268438 kernel: kvm-clock: cpu 2, msr 311a0081, secondary cpu clock
Nov 1 00:37:07.268448 kernel: kvm-guest: setup async PF for cpu 2
Nov 1 00:37:07.268458 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Nov 1 00:37:07.268473 kernel: #3
Nov 1 00:37:07.268503 kernel: kvm-clock: cpu 3, msr 311a00c1, secondary cpu clock
Nov 1 00:37:07.268513 kernel: kvm-guest: setup async PF for cpu 3
Nov 1 00:37:07.268524 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Nov 1 00:37:07.268537 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:37:07.268558 kernel: smpboot: Max logical packages: 1
Nov 1 00:37:07.268575 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 00:37:07.268594 kernel: devtmpfs: initialized
Nov 1 00:37:07.268605 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:37:07.268615 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:37:07.268625 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:37:07.268636 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:37:07.268646 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:37:07.268660 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:37:07.268685 kernel: audit: type=2000 audit(1761957425.781:1): state=initialized audit_enabled=0 res=1
Nov 1 00:37:07.268711 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:37:07.268722 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:37:07.268732 kernel: cpuidle: using governor menu
Nov 1 00:37:07.268748 kernel: ACPI: bus type PCI registered
Nov 1 00:37:07.268766 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:37:07.268777 kernel: dca service started, version 1.12.1
Nov 1 00:37:07.268787 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:37:07.268801 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Nov 1 00:37:07.268827 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:37:07.268838 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:37:07.268849 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:37:07.268859 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:37:07.268870 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:37:07.268880 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:37:07.268890 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:37:07.268912 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:37:07.268931 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:37:07.268942 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:37:07.268952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:37:07.268962 kernel: ACPI: Interpreter enabled
Nov 1 00:37:07.268988 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:37:07.268999 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:37:07.269010 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:37:07.269020 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:37:07.269031 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:37:07.269267 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:37:07.269474 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:37:07.269627 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:37:07.269662 kernel: PCI host bridge to bus 0000:00
Nov 1 00:37:07.269873 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:37:07.270037 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:37:07.270205 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:37:07.270343 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 1 00:37:07.270501 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:37:07.270625 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 1 00:37:07.270731 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:37:07.270852 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:37:07.271008 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:37:07.271201 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 1 00:37:07.271343 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 1 00:37:07.271477 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 1 00:37:07.271653 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:37:07.271800 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:37:07.271933 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 00:37:07.272067 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 1 00:37:07.272232 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 1 00:37:07.272362 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:37:07.272530 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:37:07.272635 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 1 00:37:07.272736 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 1 00:37:07.272872 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:37:07.272978 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 1 00:37:07.273108 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 1 00:37:07.273241 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 1 00:37:07.273340 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 1 00:37:07.273492 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:37:07.273663 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:37:07.273817 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:37:07.273953 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 1 00:37:07.274110 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 1 00:37:07.274257 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:37:07.274396 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:37:07.274419 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:37:07.274429 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:37:07.274439 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:37:07.274448 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:37:07.274461 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:37:07.274470 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:37:07.274491 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:37:07.274510 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:37:07.274520 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:37:07.274529 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:37:07.274538 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:37:07.274548 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:37:07.274557 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:37:07.274569 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:37:07.274579 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:37:07.274588 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:37:07.274597 kernel: iommu: Default domain type: Translated
Nov 1 00:37:07.274607 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:37:07.274748 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:37:07.274887 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:37:07.275023 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:37:07.275044 kernel: vgaarb: loaded
Nov 1 00:37:07.275054 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:37:07.275065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:37:07.275076 kernel: PTP clock support registered
Nov 1 00:37:07.275086 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:37:07.275097 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:37:07.275107 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:37:07.275117 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 1 00:37:07.275127 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:37:07.275140 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:37:07.275151 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:37:07.275161 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:37:07.275172 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:37:07.275182 kernel: pnp: PnP ACPI init
Nov 1 00:37:07.275315 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:37:07.275333 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 00:37:07.275344 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:37:07.275358 kernel: NET: Registered PF_INET protocol family
Nov 1 00:37:07.275369 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:37:07.275394 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:37:07.275406 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:37:07.275417 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:37:07.275427 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Nov 1 00:37:07.275438 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:37:07.275449 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:37:07.275459 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:37:07.275472 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:37:07.275493 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:37:07.275609 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:37:07.275721 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:37:07.275823 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:37:07.275959 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 1 00:37:07.276065 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:37:07.276188 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 1 00:37:07.276207 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:37:07.276218 kernel: Initialise system trusted keyrings
Nov 1 00:37:07.276228 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:37:07.276238 kernel: Key type asymmetric registered
Nov 1 00:37:07.276249 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:37:07.276259 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:37:07.276270 kernel: io scheduler mq-deadline registered
Nov 1 00:37:07.276280 kernel: io scheduler kyber registered
Nov 1 00:37:07.276290 kernel: io scheduler bfq registered
Nov 1 00:37:07.276300 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:37:07.276314 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:37:07.276325 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:37:07.276335 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 00:37:07.276345 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:37:07.276356 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:37:07.276366 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:37:07.276377 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:37:07.276403 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:37:07.276546 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 00:37:07.276567 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:37:07.276687 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 00:37:07.276852 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:37:06 UTC (1761957426)
Nov 1 00:37:07.277013 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:37:07.277030 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:37:07.277041 kernel: Segment Routing with IPv6
Nov 1 00:37:07.277051 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:37:07.277061 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:37:07.277085 kernel: Key type dns_resolver registered
Nov 1 00:37:07.277095 kernel: IPI shorthand broadcast: enabled
Nov 1 00:37:07.277106 kernel: sched_clock: Marking stable (581180891, 193014733)->(902639451, -128443827)
Nov 1 00:37:07.277117 kernel: registered taskstats version 1
Nov 1 00:37:07.277127 kernel: Loading compiled-in X.509 certificates
Nov 1 00:37:07.277143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:37:07.277161 kernel: Key type .fscrypt registered
Nov 1 00:37:07.277174 kernel: Key type fscrypt-provisioning registered
Nov 1 00:37:07.277187 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:37:07.277209 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:37:07.277224 kernel: ima: No architecture policies found
Nov 1 00:37:07.277241 kernel: clk: Disabling unused clocks
Nov 1 00:37:07.277254 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:37:07.277267 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:37:07.277280 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:37:07.277293 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:37:07.277307 kernel: Run /init as init process
Nov 1 00:37:07.277323 kernel: with arguments:
Nov 1 00:37:07.277336 kernel: /init
Nov 1 00:37:07.277346 kernel: with environment:
Nov 1 00:37:07.277362 kernel: HOME=/
Nov 1 00:37:07.277373 kernel: TERM=linux
Nov 1 00:37:07.277397 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:37:07.277415 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:37:07.277429 systemd[1]: Detected virtualization kvm.
Nov 1 00:37:07.277445 systemd[1]: Detected architecture x86-64.
Nov 1 00:37:07.277462 systemd[1]: Running in initrd.
Nov 1 00:37:07.277487 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:37:07.277501 systemd[1]: Hostname set to .
Nov 1 00:37:07.277513 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:37:07.277536 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:37:07.277548 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:37:07.277567 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:37:07.277579 systemd[1]: Reached target paths.target.
Nov 1 00:37:07.277593 systemd[1]: Reached target slices.target.
Nov 1 00:37:07.277613 systemd[1]: Reached target swap.target.
Nov 1 00:37:07.277626 systemd[1]: Reached target timers.target.
Nov 1 00:37:07.277639 systemd[1]: Listening on iscsid.socket.
Nov 1 00:37:07.277650 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:37:07.277664 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:37:07.277675 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:37:07.277687 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:37:07.277699 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:37:07.277710 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:37:07.277722 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:37:07.277733 systemd[1]: Reached target sockets.target.
Nov 1 00:37:07.277744 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:37:07.277758 systemd[1]: Finished network-cleanup.service.
Nov 1 00:37:07.277771 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:37:07.277783 systemd[1]: Starting systemd-journald.service...
Nov 1 00:37:07.277794 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:37:07.277805 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:37:07.277817 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:37:07.277829 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:37:07.277840 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:37:07.277854 systemd-journald[199]: Journal started
Nov 1 00:37:07.278025 systemd-journald[199]: Runtime Journal (/run/log/journal/8c20ae0c5ef14bafa89d54d49872abaf) is 6.0M, max 48.5M, 42.5M free.
Nov 1 00:37:07.252661 systemd-modules-load[200]: Inserted module 'overlay'
Nov 1 00:37:07.348189 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:37:07.348219 kernel: Bridge firewalling registered
Nov 1 00:37:07.348230 kernel: SCSI subsystem initialized
Nov 1 00:37:07.348240 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:37:07.348251 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:37:07.348260 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:37:07.348269 kernel: audit: type=1130 audit(1761957427.347:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.355674 systemd[1]: Started systemd-journald.service.
Nov 1 00:37:07.279716 systemd-resolved[201]: Positive Trust Anchors:
Nov 1 00:37:07.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.279739 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:37:07.394366 kernel: audit: type=1130 audit(1761957427.358:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.394420 kernel: audit: type=1130 audit(1761957427.364:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.394431 kernel: audit: type=1130 audit(1761957427.371:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.394442 kernel: audit: type=1130 audit(1761957427.377:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.279776 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:37:07.411656 kernel: audit: type=1130 audit(1761957427.403:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.283730 systemd-resolved[201]: Defaulting to hostname 'linux'.
Nov 1 00:37:07.420694 kernel: audit: type=1130 audit(1761957427.411:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.420715 kernel: audit: type=1130 audit(1761957427.420:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.292027 systemd-modules-load[200]: Inserted module 'br_netfilter'
Nov 1 00:37:07.323227 systemd-modules-load[200]: Inserted module 'dm_multipath'
Nov 1 00:37:07.359091 systemd[1]: Started systemd-resolved.service.
Nov 1 00:37:07.365305 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:37:07.433468 dracut-cmdline[223]: dracut-dracut-053
Nov 1 00:37:07.372272 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:37:07.378926 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:37:07.437970 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:37:07.386717 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:37:07.387581 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:37:07.388600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:37:07.397466 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:37:07.404337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:37:07.411985 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:37:07.421738 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:37:07.500425 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:37:07.517424 kernel: iscsi: registered transport (tcp)
Nov 1 00:37:07.539443 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:37:07.539581 kernel: QLogic iSCSI HBA Driver
Nov 1 00:37:07.570455 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:37:07.579197 kernel: audit: type=1130 audit(1761957427.570:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.571937 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:37:07.622419 kernel: raid6: avx2x4 gen() 20011 MB/s
Nov 1 00:37:07.640416 kernel: raid6: avx2x4 xor() 7517 MB/s
Nov 1 00:37:07.658405 kernel: raid6: avx2x2 gen() 27704 MB/s
Nov 1 00:37:07.676413 kernel: raid6: avx2x2 xor() 18670 MB/s
Nov 1 00:37:07.694415 kernel: raid6: avx2x1 gen() 25160 MB/s
Nov 1 00:37:07.712415 kernel: raid6: avx2x1 xor() 14518 MB/s
Nov 1 00:37:07.730415 kernel: raid6: sse2x4 gen() 12722 MB/s
Nov 1 00:37:07.748433 kernel: raid6: sse2x4 xor() 6711 MB/s
Nov 1 00:37:07.766438 kernel: raid6: sse2x2 gen() 15249 MB/s
Nov 1 00:37:07.784437 kernel: raid6: sse2x2 xor() 9404 MB/s
Nov 1 00:37:07.802436 kernel: raid6: sse2x1 gen() 11708 MB/s
Nov 1 00:37:07.821106 kernel: raid6: sse2x1 xor() 7324 MB/s
Nov 1 00:37:07.821200 kernel: raid6: using algorithm avx2x2 gen() 27704 MB/s
Nov 1 00:37:07.821215 kernel: raid6: .... xor() 18670 MB/s, rmw enabled
Nov 1 00:37:07.822415 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:37:07.837424 kernel: xor: automatically using best checksumming function avx
Nov 1 00:37:07.952420 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Nov 1 00:37:07.962133 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:37:07.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.964000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:37:07.964000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:37:07.965550 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:37:07.981172 systemd-udevd[401]: Using default interface naming scheme 'v252'.
Nov 1 00:37:07.985774 systemd[1]: Started systemd-udevd.service.
Nov 1 00:37:07.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:07.988133 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:37:07.999659 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Nov 1 00:37:08.028436 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:37:08.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:08.032195 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:37:08.075449 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:37:08.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:08.115022 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 1 00:37:08.126103 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:37:08.126125 kernel: GPT:9289727 != 19775487
Nov 1 00:37:08.126139 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:37:08.126153 kernel: GPT:9289727 != 19775487
Nov 1 00:37:08.126172 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:37:08.126186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:37:08.128408 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:37:08.150424 kernel: libata version 3.00 loaded.
Nov 1 00:37:08.164407 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:37:08.164476 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:37:08.172404 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (450)
Nov 1 00:37:08.177863 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 00:37:08.252819 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:37:08.253011 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:37:08.253023 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:37:08.253107 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:37:08.253184 kernel: scsi host0: ahci
Nov 1 00:37:08.253287 kernel: scsi host1: ahci
Nov 1 00:37:08.253376 kernel: scsi host2: ahci
Nov 1 00:37:08.253573 kernel: scsi host3: ahci
Nov 1 00:37:08.253667 kernel: scsi host4: ahci
Nov 1 00:37:08.253751 kernel: scsi host5: ahci
Nov 1 00:37:08.253835 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 1 00:37:08.253845 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 1 00:37:08.253854 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 1 00:37:08.253863 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 1 00:37:08.253875 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 1 00:37:08.253884 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 1 00:37:08.252813 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 00:37:08.260305 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 00:37:08.271477 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 00:37:08.279645 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:37:08.283753 systemd[1]: Starting disk-uuid.service...
Nov 1 00:37:08.294510 disk-uuid[523]: Primary Header is updated.
Nov 1 00:37:08.294510 disk-uuid[523]: Secondary Entries is updated.
Nov 1 00:37:08.294510 disk-uuid[523]: Secondary Header is updated.
Nov 1 00:37:08.301427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:37:08.306182 kernel: GPT:disk_guids don't match.
Nov 1 00:37:08.306240 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:37:08.306258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:37:08.312441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:37:08.505472 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 00:37:08.505555 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:37:08.506403 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:37:08.507406 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:37:08.509430 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 00:37:08.511412 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:37:08.512426 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 00:37:08.514600 kernel: ata3.00: applying bridge limits
Nov 1 00:37:08.515817 kernel: ata3.00: configured for UDMA/100
Nov 1 00:37:08.516409 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 00:37:08.551795 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 00:37:08.568528 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:37:08.568551 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 00:37:09.359350 disk-uuid[524]: The operation has completed successfully.
Nov 1 00:37:09.361712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:37:09.383283 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:37:09.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.383367 systemd[1]: Finished disk-uuid.service.
Nov 1 00:37:09.390238 systemd[1]: Starting verity-setup.service...
Nov 1 00:37:09.405408 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:37:09.423574 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 00:37:09.425585 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 00:37:09.429533 systemd[1]: Finished verity-setup.service.
Nov 1 00:37:09.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.487405 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:37:09.487588 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 00:37:09.488453 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Nov 1 00:37:09.489084 systemd[1]: Starting ignition-setup.service...
Nov 1 00:37:09.503444 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 00:37:09.511569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:37:09.511598 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:37:09.511608 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:37:09.521146 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:37:09.559316 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 00:37:09.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.562000 audit: BPF prog-id=9 op=LOAD
Nov 1 00:37:09.563675 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:37:09.584951 systemd-networkd[708]: lo: Link UP
Nov 1 00:37:09.584959 systemd-networkd[708]: lo: Gained carrier
Nov 1 00:37:09.585437 systemd-networkd[708]: Enumeration completed
Nov 1 00:37:09.585758 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:37:09.586573 systemd[1]: Started systemd-networkd.service.
Nov 1 00:37:09.586851 systemd-networkd[708]: eth0: Link UP
Nov 1 00:37:09.586855 systemd-networkd[708]: eth0: Gained carrier
Nov 1 00:37:09.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.595843 systemd[1]: Reached target network.target.
Nov 1 00:37:09.599168 systemd[1]: Starting iscsiuio.service...
Nov 1 00:37:09.603975 systemd[1]: Started iscsiuio.service.
Nov 1 00:37:09.605495 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:37:09.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.609457 systemd[1]: Starting iscsid.service...
Nov 1 00:37:09.613213 iscsid[713]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:37:09.613213 iscsid[713]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Nov 1 00:37:09.613213 iscsid[713]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 00:37:09.613213 iscsid[713]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 00:37:09.613213 iscsid[713]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:37:09.613213 iscsid[713]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 00:37:09.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.615828 systemd[1]: Started iscsid.service.
Nov 1 00:37:09.634312 systemd[1]: Starting dracut-initqueue.service...
Nov 1 00:37:09.643928 systemd[1]: Finished dracut-initqueue.service.
Nov 1 00:37:09.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.646868 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:37:09.650073 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:37:09.653096 systemd[1]: Reached target remote-fs.target.
Nov 1 00:37:09.656394 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 00:37:09.664169 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 00:37:09.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.834431 systemd[1]: Finished ignition-setup.service.
Nov 1 00:37:09.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.835909 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 00:37:09.883470 ignition[728]: Ignition 2.14.0
Nov 1 00:37:09.883479 ignition[728]: Stage: fetch-offline
Nov 1 00:37:09.883531 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:37:09.883540 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:37:09.883639 ignition[728]: parsed url from cmdline: ""
Nov 1 00:37:09.883642 ignition[728]: no config URL provided
Nov 1 00:37:09.883647 ignition[728]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:37:09.883653 ignition[728]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:37:09.883669 ignition[728]: op(1): [started] loading QEMU firmware config module
Nov 1 00:37:09.883673 ignition[728]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 00:37:09.886965 ignition[728]: op(1): [finished] loading QEMU firmware config module
Nov 1 00:37:09.976513 ignition[728]: parsing config with SHA512: 3a7d96ec83896b3bdd5e4a0bbac60529c71b907fa0e09b2b6650cc3e16d62a4b0060bd97d0aafa42ba29b609989d1cffb993fff9f85d2e9c26141caa74e09ba7
Nov 1 00:37:09.982792 unknown[728]: fetched base config from "system"
Nov 1 00:37:09.982802 unknown[728]: fetched user config from "qemu"
Nov 1 00:37:09.985694 ignition[728]: fetch-offline: fetch-offline passed
Nov 1 00:37:09.987002 ignition[728]: Ignition finished successfully
Nov 1 00:37:09.988521 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 00:37:09.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:09.989233 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:37:09.989944 systemd[1]: Starting ignition-kargs.service...
Nov 1 00:37:10.001682 ignition[736]: Ignition 2.14.0
Nov 1 00:37:10.001690 ignition[736]: Stage: kargs
Nov 1 00:37:10.001782 ignition[736]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:37:10.001791 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:37:10.002793 ignition[736]: kargs: kargs passed
Nov 1 00:37:10.002832 ignition[736]: Ignition finished successfully
Nov 1 00:37:10.009431 systemd[1]: Finished ignition-kargs.service.
Nov 1 00:37:10.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:10.010818 systemd[1]: Starting ignition-disks.service...
Nov 1 00:37:10.016804 ignition[742]: Ignition 2.14.0
Nov 1 00:37:10.016811 ignition[742]: Stage: disks
Nov 1 00:37:10.016913 ignition[742]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:37:10.016922 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:37:10.017937 ignition[742]: disks: disks passed
Nov 1 00:37:10.017975 ignition[742]: Ignition finished successfully
Nov 1 00:37:10.024327 systemd[1]: Finished ignition-disks.service.
Nov 1 00:37:10.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:10.025043 systemd[1]: Reached target initrd-root-device.target.
Nov 1 00:37:10.027261 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:37:10.029956 systemd[1]: Reached target local-fs.target.
Nov 1 00:37:10.032348 systemd[1]: Reached target sysinit.target.
Nov 1 00:37:10.034934 systemd[1]: Reached target basic.target.
Nov 1 00:37:10.037562 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 00:37:10.050144 systemd-fsck[750]: ROOT: clean, 637/553520 files, 56032/553472 blocks
Nov 1 00:37:10.055248 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 00:37:10.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:10.056839 systemd[1]: Mounting sysroot.mount...
Nov 1 00:37:10.063399 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 00:37:10.063984 systemd[1]: Mounted sysroot.mount.
Nov 1 00:37:10.067218 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 00:37:10.070664 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 00:37:10.073124 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Nov 1 00:37:10.073158 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:37:10.075397 systemd[1]: Reached target ignition-diskful.target.
Nov 1 00:37:10.081601 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 00:37:10.084521 systemd[1]: Starting initrd-setup-root.service...
Nov 1 00:37:10.088221 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:37:10.092530 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:37:10.096787 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:37:10.100900 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:37:10.122899 systemd[1]: Finished initrd-setup-root.service.
Nov 1 00:37:10.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:10.126329 systemd[1]: Starting ignition-mount.service...
Nov 1 00:37:10.129456 systemd[1]: Starting sysroot-boot.service...
Nov 1 00:37:10.132679 bash[801]: umount: /sysroot/usr/share/oem: not mounted.
Nov 1 00:37:10.142404 ignition[803]: INFO : Ignition 2.14.0
Nov 1 00:37:10.142404 ignition[803]: INFO : Stage: mount
Nov 1 00:37:10.144941 ignition[803]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:37:10.144941 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:37:10.144941 ignition[803]: INFO : mount: mount passed
Nov 1 00:37:10.144941 ignition[803]: INFO : Ignition finished successfully
Nov 1 00:37:10.151357 systemd[1]: Finished ignition-mount.service.
Nov 1 00:37:10.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:10.154190 systemd[1]: Finished sysroot-boot.service.
Nov 1 00:37:10.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:10.436631 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:37:10.447736 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Nov 1 00:37:10.447820 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:37:10.447848 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:37:10.449473 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:37:10.455128 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:37:10.457046 systemd[1]: Starting ignition-files.service...
Nov 1 00:37:10.471991 ignition[831]: INFO : Ignition 2.14.0
Nov 1 00:37:10.471991 ignition[831]: INFO : Stage: files
Nov 1 00:37:10.475074 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:37:10.475074 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:37:10.475074 ignition[831]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:37:10.482795 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:37:10.482795 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:37:10.489651 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:37:10.492724 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:37:10.495429 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:37:10.495429 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:37:10.495429 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:37:10.495429 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:37:10.495429 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:37:10.493801 unknown[831]: wrote ssh authorized keys file for user: core
Nov 1 00:37:10.557886 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:37:10.652524 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:37:10.656545 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:37:10.656545 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 1 00:37:10.672634 systemd-networkd[708]: eth0: Gained IPv6LL
Nov 1 00:37:10.892595 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Nov 1 00:37:11.171431 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:37:11.171431 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:37:11.177720 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:37:11.177720 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:37:11.184115 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:37:11.184115 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:37:11.190483 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:37:11.193548 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:37:11.196647 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:37:11.199840 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:37:11.203275 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:37:11.206055 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:37:11.210090 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:37:11.228613 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:37:11.232041 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:37:11.468156 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Nov 1 00:37:11.867826 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:37:11.867826 ignition[831]: INFO : files: op(d): [started] processing unit "containerd.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(d): [finished] processing unit "containerd.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:37:11.875978 ignition[831]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:37:11.956373 ignition[831]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:37:11.959751 ignition[831]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:37:11.959751 ignition[831]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:37:11.959751 ignition[831]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:37:11.959751 ignition[831]: INFO : files: files passed
Nov 1 00:37:11.959751 ignition[831]: INFO : Ignition finished successfully
Nov 1 00:37:11.983059 kernel: kauditd_printk_skb: 24 callbacks suppressed
Nov 1 00:37:11.983091 kernel: audit: type=1130 audit(1761957431.967:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:11.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:11.966765 systemd[1]: Finished ignition-files.service.
Nov 1 00:37:11.968995 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:37:11.980868 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:37:11.990521 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Nov 1 00:37:11.993112 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:37:11.994031 systemd[1]: Starting ignition-quench.service...
Nov 1 00:37:11.998862 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:37:12.007953 kernel: audit: type=1130 audit(1761957431.999:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:11.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
terminal=? res=success'
Nov 1 00:37:12.000248 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:37:12.000360 systemd[1]: Finished ignition-quench.service.
Nov 1 00:37:12.022689 kernel: audit: type=1130 audit(1761957432.010:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.022712 kernel: audit: type=1131 audit(1761957432.010:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.011437 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:37:12.024315 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:37:12.036798 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:37:12.036907 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:37:12.051510 kernel: audit: type=1130 audit(1761957432.039:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.051553 kernel: audit: type=1131 audit(1761957432.039:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.039860 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:37:12.052223 systemd[1]: Reached target initrd.target.
Nov 1 00:37:12.055155 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:37:12.056547 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:37:12.072761 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:37:12.080896 kernel: audit: type=1130 audit(1761957432.073:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.074677 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:37:12.088209 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:37:12.089022 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:37:12.092180 systemd[1]: Stopped target timers.target.
Nov 1 00:37:12.093160 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:37:12.104707 kernel: audit: type=1131 audit(1761957432.096:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.093362 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:37:12.096585 systemd[1]: Stopped target initrd.target.
Nov 1 00:37:12.104789 systemd[1]: Stopped target basic.target.
Nov 1 00:37:12.106157 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:37:12.108847 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:37:12.111694 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:37:12.114473 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:37:12.117111 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:37:12.119934 systemd[1]: Stopped target sysinit.target.
Nov 1 00:37:12.122570 systemd[1]: Stopped target local-fs.target.
Nov 1 00:37:12.125026 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:37:12.140975 kernel: audit: type=1131 audit(1761957432.132:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.127658 systemd[1]: Stopped target swap.target.
Nov 1 00:37:12.129960 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:37:12.152371 kernel: audit: type=1131 audit(1761957432.143:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.130060 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:37:12.132719 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:37:12.141013 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:37:12.141112 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:37:12.143565 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:37:12.143652 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:37:12.152481 systemd[1]: Stopped target paths.target.
Nov 1 00:37:12.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.153180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:37:12.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.158490 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:37:12.177430 iscsid[713]: iscsid shutting down.
Nov 1 00:37:12.160201 systemd[1]: Stopped target slices.target.
Nov 1 00:37:12.162649 systemd[1]: Stopped target sockets.target.
Nov 1 00:37:12.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Nov 1 00:37:12.165602 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:37:12.165778 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:37:12.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.168637 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:37:12.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.195256 ignition[871]: INFO : Ignition 2.14.0
Nov 1 00:37:12.195256 ignition[871]: INFO : Stage: umount
Nov 1 00:37:12.195256 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:37:12.195256 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:37:12.195256 ignition[871]: INFO : umount: umount passed
Nov 1 00:37:12.195256 ignition[871]: INFO : Ignition finished successfully
Nov 1 00:37:12.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.168769 systemd[1]: Stopped ignition-files.service.
Nov 1 00:37:12.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.172185 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:37:12.174689 systemd[1]: Stopping iscsid.service...
Nov 1 00:37:12.178188 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:37:12.180542 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:37:12.180800 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:37:12.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.183643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:37:12.183783 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:37:12.189518 systemd[1]: iscsid.service: Deactivated successfully.
Nov 1 00:37:12.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.189640 systemd[1]: Stopped iscsid.service.
Nov 1 00:37:12.192726 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:37:12.262000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:37:12.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.192819 systemd[1]: Closed iscsid.socket.
Nov 1 00:37:12.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.196884 systemd[1]: Stopping iscsiuio.service...
Nov 1 00:37:12.199665 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:37:12.199818 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:37:12.204038 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:37:12.204655 systemd[1]: iscsiuio.service: Deactivated successfully.
Nov 1 00:37:12.204797 systemd[1]: Stopped iscsiuio.service.
Nov 1 00:37:12.206543 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:37:12.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.206673 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:37:12.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.210150 systemd[1]: Stopped target network.target.
Nov 1 00:37:12.212011 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:37:12.212050 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:37:12.213266 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:37:12.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.213311 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:37:12.214708 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:37:12.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.214767 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:37:12.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.217457 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:37:12.217502 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:37:12.218993 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:37:12.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Nov 1 00:37:12.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.220279 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:37:12.224480 systemd-networkd[708]: eth0: DHCPv6 lease lost
Nov 1 00:37:12.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.315000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 00:37:12.227020 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:37:12.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:12.227097 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:37:12.251720 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:37:12.251814 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:37:12.256452 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:37:12.256515 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:37:12.259968 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:37:12.329000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:37:12.329000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:37:12.329000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:37:12.261442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:37:12.330000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:37:12.330000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:37:12.261541 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:37:12.264591 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:37:12.264625 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:37:12.266248 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:37:12.266284 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:37:12.268858 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:37:12.271145 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:37:12.278216 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:37:12.347503 systemd-journald[199]: Received SIGTERM from PID 1 (n/a).
Nov 1 00:37:12.278321 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:37:12.280778 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:37:12.280923 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:37:12.285072 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:37:12.285107 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:37:12.287564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:37:12.287603 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:37:12.290321 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:37:12.290435 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:37:12.292814 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:37:12.292858 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:37:12.295571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:37:12.295608 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:37:12.299471 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:37:12.301408 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:37:12.301466 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Nov 1 00:37:12.303022 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:37:12.303070 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:37:12.305671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:37:12.305724 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:37:12.309472 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 1 00:37:12.309961 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:37:12.310047 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:37:12.312790 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:37:12.312860 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:37:12.315288 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:37:12.317826 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:37:12.317865 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:37:12.319737 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:37:12.326591 systemd[1]: Switching root.
Nov 1 00:37:12.356883 systemd-journald[199]: Journal stopped
Nov 1 00:37:15.785011 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:37:15.785057 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:37:15.785070 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:37:15.785080 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:37:15.785089 kernel: SELinux: policy capability open_perms=1
Nov 1 00:37:15.785099 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:37:15.785108 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:37:15.785120 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:37:15.785130 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:37:15.785140 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:37:15.785152 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:37:15.785162 systemd[1]: Successfully loaded SELinux policy in 50.839ms.
Nov 1 00:37:15.785175 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.476ms.
Nov 1 00:37:15.785186 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:37:15.785197 systemd[1]: Detected virtualization kvm.
Nov 1 00:37:15.785207 systemd[1]: Detected architecture x86-64.
Nov 1 00:37:15.785217 systemd[1]: Detected first boot.
Nov 1 00:37:15.785229 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:37:15.785249 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:37:15.785259 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:37:15.785270 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:37:15.785281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:37:15.785292 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:37:15.785305 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:37:15.785315 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Nov 1 00:37:15.785328 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:37:15.785339 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:37:15.785348 systemd[1]: Created slice system-getty.slice.
Nov 1 00:37:15.785359 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:37:15.785368 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:37:15.785400 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:37:15.785411 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:37:15.785421 systemd[1]: Created slice user.slice.
Nov 1 00:37:15.785431 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:37:15.785442 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:37:15.785453 systemd[1]: Set up automount boot.automount.
Nov 1 00:37:15.785463 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:37:15.785473 systemd[1]: Reached target integritysetup.target.
Nov 1 00:37:15.785484 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:37:15.785498 systemd[1]: Reached target remote-fs.target.
Nov 1 00:37:15.785509 systemd[1]: Reached target slices.target.
Nov 1 00:37:15.785521 systemd[1]: Reached target swap.target.
Nov 1 00:37:15.785533 systemd[1]: Reached target torcx.target.
Nov 1 00:37:15.785543 systemd[1]: Reached target veritysetup.target.
Nov 1 00:37:15.785553 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:37:15.785563 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:37:15.785573 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:37:15.785584 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:37:15.785594 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:37:15.785604 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:37:15.785614 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:37:15.785624 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:37:15.785636 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:37:15.785646 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:37:15.785656 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:37:15.785666 systemd[1]: Mounting media.mount...
Nov 1 00:37:15.785677 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:37:15.785687 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:37:15.785697 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:37:15.785708 systemd[1]: Mounting tmp.mount...
Nov 1 00:37:15.785718 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:37:15.785730 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:37:15.785740 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:37:15.785750 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:37:15.785760 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:37:15.785770 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:37:15.785780 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:37:15.785790 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:37:15.785800 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:37:15.785810 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:37:15.785822 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 00:37:15.785832 kernel: loop: module loaded
Nov 1 00:37:15.785842 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 00:37:15.785852 kernel: fuse: init (API version 7.34)
Nov 1 00:37:15.785862 systemd[1]: Starting systemd-journald.service...
Nov 1 00:37:15.785873 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:37:15.785883 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:37:15.785893 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:37:15.785905 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:37:15.785915 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:37:15.785927 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:37:15.785939 systemd-journald[1027]: Journal started
Nov 1 00:37:15.785976 systemd-journald[1027]: Runtime Journal (/run/log/journal/8c20ae0c5ef14bafa89d54d49872abaf) is 6.0M, max 48.5M, 42.5M free.
Nov 1 00:37:15.664000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:37:15.664000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Nov 1 00:37:15.783000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:37:15.783000 audit[1027]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd5011f120 a2=4000 a3=7ffd5011f1bc items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:37:15.783000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:37:15.788767 systemd[1]: Started systemd-journald.service.
Nov 1 00:37:15.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:15.789758 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:37:15.791305 systemd[1]: Mounted media.mount.
Nov 1 00:37:15.792803 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:37:15.794529 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:37:15.796256 systemd[1]: Mounted tmp.mount.
Nov 1 00:37:15.798187 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:37:15.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Nov 1 00:37:15.800462 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:37:15.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.802434 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:37:15.802736 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:37:15.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.804770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:37:15.805033 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:37:15.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.807000 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:37:15.807433 systemd[1]: Finished modprobe@drm.service. Nov 1 00:37:15.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:37:15.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.809441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:37:15.809603 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:37:15.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.811348 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:37:15.811544 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:37:15.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.813250 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:37:15.813482 systemd[1]: Finished modprobe@loop.service. Nov 1 00:37:15.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:37:15.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.815191 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:37:15.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.817091 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:37:15.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.819076 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:37:15.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:15.821045 systemd[1]: Reached target network-pre.target. Nov 1 00:37:15.823685 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:37:15.826043 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:37:15.827573 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:37:15.829287 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:37:15.831962 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:37:15.833494 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:37:15.834952 systemd[1]: Starting systemd-random-seed.service... 
Nov 1 00:37:15.836456 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:37:15.837777 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:37:15.840158 systemd-journald[1027]: Time spent on flushing to /var/log/journal/8c20ae0c5ef14bafa89d54d49872abaf is 27.217ms for 1047 entries.
Nov 1 00:37:15.840158 systemd-journald[1027]: System Journal (/var/log/journal/8c20ae0c5ef14bafa89d54d49872abaf) is 8.0M, max 195.6M, 187.6M free.
Nov 1 00:37:15.883211 systemd-journald[1027]: Received client request to flush runtime journal.
Nov 1 00:37:15.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:15.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:15.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:15.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:15.840399 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:37:15.846072 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:37:15.848080 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:37:15.884132 udevadm[1057]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:37:15.849675 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:37:15.852252 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:37:15.853864 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:37:15.855470 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:37:15.860798 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:37:15.865282 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:37:15.869616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:37:15.885498 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:37:15.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:15.890300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:37:15.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:16.305587 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:37:16.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:16.308736 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:37:16.326418 systemd-udevd[1066]: Using default interface naming scheme 'v252'.
Nov 1 00:37:16.339259 systemd[1]: Started systemd-udevd.service.
Nov 1 00:37:16.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:16.342073 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:37:16.347776 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:37:16.361115 systemd[1]: Found device dev-ttyS0.device.
Nov 1 00:37:16.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:16.383447 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:37:16.395421 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 1 00:37:16.401400 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:37:16.418637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:37:16.417000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:37:16.417000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ddd6973090 a1=338ec a2=7f6067c94bc5 a3=5 items=110 ppid=1066 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:37:16.417000 audit: CWD cwd="/"
Nov 1 00:37:16.417000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=1 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=2 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=3 name=(null) inode=12820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=4 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=5 name=(null) inode=12821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=6 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=7 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=8 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=9 name=(null) inode=12823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=10 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=11 name=(null) inode=12824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=12 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=13 name=(null) inode=12825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=14 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=15 name=(null) inode=12826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=16 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=17 name=(null) inode=12827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=18 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=19 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=20 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=21 name=(null) inode=12829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=22 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=23 name=(null) inode=12830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=24 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=25 name=(null) inode=12831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=26 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=27 name=(null) inode=12832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=28 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=29 name=(null) inode=12833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=30 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=31 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=32 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=33 name=(null) inode=12835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=34 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=35 name=(null) inode=12836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=36 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=37 name=(null) inode=12837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=38 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=39 name=(null) inode=12838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=40 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=41 name=(null) inode=12839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=42 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=43 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=44 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=45 name=(null) inode=12841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=46 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=47 name=(null) inode=12842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=48 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=49 name=(null) inode=12843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=50 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=51 name=(null) inode=12844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=52 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=53 name=(null) inode=12845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=55 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=56 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=57 name=(null) inode=12847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=58 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=59 name=(null) inode=12848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=60 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=61 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=62 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=63 name=(null) inode=12850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=64 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=65 name=(null) inode=12851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=66 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=67 name=(null) inode=12852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=68 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=69 name=(null) inode=12853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=70 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=71 name=(null) inode=12854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=72 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=73 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=74 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=75 name=(null) inode=12856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=76 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=77 name=(null) inode=12857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=78 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=79 name=(null) inode=12858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=80 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=81 name=(null) inode=12859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=82 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=83 name=(null) inode=12860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=84 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=85 name=(null) inode=12861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=86 name=(null) inode=12861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=87 name=(null) inode=12862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=88 name=(null) inode=12861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=89 name=(null) inode=12863 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=90 name=(null) inode=12861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=91 name=(null) inode=12864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=92 name=(null) inode=12861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=93 name=(null) inode=12865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=94 name=(null) inode=12861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=95 name=(null) inode=12866 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=96 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=97 name=(null) inode=12867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=98 name=(null) inode=12867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=99 name=(null) inode=12868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=100 name=(null) inode=12867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=101 name=(null) inode=12869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=102 name=(null) inode=12867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=103 name=(null) inode=12870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=104 name=(null) inode=12867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=105 name=(null) inode=12871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=106 name=(null) inode=12867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=107 name=(null) inode=12872 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PATH item=109 name=(null) inode=12873 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:37:16.417000 audit: PROCTITLE proctitle="(udev-worker)"
Nov 1 00:37:16.431789 systemd-networkd[1077]: lo: Link UP
Nov 1 00:37:16.431797 systemd-networkd[1077]: lo: Gained carrier
Nov 1 00:37:16.432167 systemd-networkd[1077]: Enumeration completed
Nov 1 00:37:16.432300 systemd[1]: Started systemd-networkd.service.
Nov 1 00:37:16.433244 systemd-networkd[1077]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:37:16.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:16.434862 systemd-networkd[1077]: eth0: Link UP
Nov 1 00:37:16.434868 systemd-networkd[1077]: eth0: Gained carrier
Nov 1 00:37:16.448570 systemd-networkd[1077]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:37:16.451408 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 00:37:16.452514 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 1 00:37:16.452652 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 00:37:16.453403 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 1 00:37:16.460421 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:37:16.520428 kernel: kvm: Nested Virtualization enabled
Nov 1 00:37:16.520618 kernel: SVM: kvm: Nested Paging enabled
Nov 1 00:37:16.520642 kernel: SVM: Virtual VMLOAD VMSAVE supported
Nov 1 00:37:16.520666 kernel: SVM: Virtual GIF supported
Nov 1 00:37:16.554488 kernel: EDAC MC: Ver: 3.0.0
Nov 1 00:37:16.580795 systemd[1]: Finished systemd-udev-settle.service.
Nov 1 00:37:16.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:16.583539 systemd[1]: Starting lvm2-activation-early.service...
Nov 1 00:37:16.593466 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:37:16.620114 systemd[1]: Finished lvm2-activation-early.service.
Nov 1 00:37:16.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:37:16.622010 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:37:16.624732 systemd[1]: Starting lvm2-activation.service...
Nov 1 00:37:16.628229 lvm[1104]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:37:16.657556 systemd[1]: Finished lvm2-activation.service. Nov 1 00:37:16.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:16.659024 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:37:16.660359 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:37:16.660398 systemd[1]: Reached target local-fs.target. Nov 1 00:37:16.661661 systemd[1]: Reached target machines.target. Nov 1 00:37:16.663984 systemd[1]: Starting ldconfig.service... Nov 1 00:37:16.665472 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:37:16.665510 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:37:16.666458 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:37:16.668713 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:37:16.671527 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:37:16.675762 systemd[1]: Starting systemd-sysext.service... Nov 1 00:37:16.677326 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1107 (bootctl) Nov 1 00:37:16.678255 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:37:16.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:37:16.685158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:37:16.691117 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:37:16.694111 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:37:16.694317 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:37:16.705409 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:37:16.722521 systemd-fsck[1116]: fsck.fat 4.2 (2021-01-31) Nov 1 00:37:16.722521 systemd-fsck[1116]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 00:37:16.724093 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:37:16.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:16.728759 systemd[1]: Mounting boot.mount... Nov 1 00:37:16.742642 systemd[1]: Mounted boot.mount. Nov 1 00:37:16.754699 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:37:16.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.702882 ldconfig[1106]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:37:17.705424 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:37:17.710282 systemd[1]: Finished ldconfig.service. Nov 1 00:37:17.719431 kernel: kauditd_printk_skb: 201 callbacks suppressed Nov 1 00:37:17.719576 kernel: audit: type=1130 audit(1761957437.711:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:37:17.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.723026 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:37:17.723902 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:37:17.726400 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:37:17.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.734414 kernel: audit: type=1130 audit(1761957437.727:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.734941 (sd-sysext)[1129]: Using extensions 'kubernetes'. Nov 1 00:37:17.735647 (sd-sysext)[1129]: Merged extensions into '/usr'. Nov 1 00:37:17.749646 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:17.751051 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:37:17.752448 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:37:17.753557 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:37:17.755833 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:37:17.758334 systemd[1]: Starting modprobe@loop.service... Nov 1 00:37:17.759701 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:37:17.759862 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:37:17.760004 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:17.762902 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:37:17.764531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:37:17.764674 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:37:17.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.766448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:37:17.766571 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:37:17.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.772419 kernel: audit: type=1130 audit(1761957437.765:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.772476 kernel: audit: type=1131 audit(1761957437.765:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:37:17.778594 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:37:17.778743 systemd[1]: Finished modprobe@loop.service. Nov 1 00:37:17.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.784411 kernel: audit: type=1130 audit(1761957437.777:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.784454 kernel: audit: type=1131 audit(1761957437.777:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.790956 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:37:17.791052 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:37:17.791909 systemd[1]: Finished systemd-sysext.service. Nov 1 00:37:17.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.796402 kernel: audit: type=1130 audit(1761957437.790:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:37:17.796436 kernel: audit: type=1131 audit(1761957437.790:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.803808 systemd[1]: Starting ensure-sysext.service... Nov 1 00:37:17.808422 kernel: audit: type=1130 audit(1761957437.802:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:17.810054 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:37:17.814825 systemd[1]: Reloading. Nov 1 00:37:17.822660 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:37:17.824142 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:37:17.826082 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Nov 1 00:37:17.862829 /usr/lib/systemd/system-generators/torcx-generator[1163]: time="2025-11-01T00:37:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:37:17.863156 /usr/lib/systemd/system-generators/torcx-generator[1163]: time="2025-11-01T00:37:17Z" level=info msg="torcx already run" Nov 1 00:37:17.946917 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:37:17.946932 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:37:17.965837 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:37:18.015825 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:37:18.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:18.019716 systemd[1]: Starting audit-rules.service... Nov 1 00:37:18.023416 kernel: audit: type=1130 audit(1761957438.016:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:18.024895 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:37:18.027136 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:37:18.029936 systemd[1]: Starting systemd-resolved.service... 
Nov 1 00:37:18.032914 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:37:18.035106 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:37:18.037029 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:37:18.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:18.038000 audit[1223]: SYSTEM_BOOT pid=1223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:37:18.042070 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:37:18.044350 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:37:18.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:18.046882 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:18.047279 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.048448 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:37:18.050867 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:37:18.053159 systemd[1]: Starting modprobe@loop.service... Nov 1 00:37:18.054458 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.054567 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:37:18.054674 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:37:18.054945 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:18.056007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:37:18.056148 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:37:18.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:18.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:37:18.058107 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:37:18.058251 systemd[1]: Finished modprobe@loop.service. Nov 1 00:37:18.058664 augenrules[1238]: No rules Nov 1 00:37:18.058000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:37:18.058000 audit[1238]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe23219c10 a2=420 a3=0 items=0 ppid=1211 pid=1238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:37:18.058000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:37:18.060062 systemd[1]: Finished audit-rules.service. Nov 1 00:37:18.061744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:37:18.062014 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 00:37:18.064125 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:37:18.067607 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:18.067817 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.068906 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:37:18.071189 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:37:18.073724 systemd[1]: Starting modprobe@loop.service... Nov 1 00:37:18.075134 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.075244 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:37:18.076434 systemd[1]: Starting systemd-update-done.service... Nov 1 00:37:18.078134 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:37:18.078227 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:18.079137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:37:18.079280 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:37:18.081105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:37:18.081238 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:37:18.083027 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:37:18.083282 systemd[1]: Finished modprobe@loop.service. Nov 1 00:37:18.085092 systemd[1]: Finished systemd-update-done.service. Nov 1 00:37:18.087116 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 1 00:37:18.087350 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.089628 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:18.089835 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.090991 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:37:18.094471 systemd[1]: Starting modprobe@drm.service... Nov 1 00:37:18.097858 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:37:18.101062 systemd[1]: Starting modprobe@loop.service... Nov 1 00:37:18.102976 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.103187 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:37:18.108069 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:37:18.111591 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:37:18.111741 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:37:18.112812 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:37:18.114600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:37:18.114772 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:37:18.115316 systemd-timesyncd[1222]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:37:18.115353 systemd-timesyncd[1222]: Initial clock synchronization to Sat 2025-11-01 00:37:18.167118 UTC. Nov 1 00:37:18.116772 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:37:18.116908 systemd[1]: Finished modprobe@drm.service. 
Nov 1 00:37:18.118474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:37:18.118625 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:37:18.120322 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:37:18.120488 systemd[1]: Finished modprobe@loop.service. Nov 1 00:37:18.126919 systemd[1]: Reached target time-set.target. Nov 1 00:37:18.128244 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:37:18.128285 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.128677 systemd[1]: Finished ensure-sysext.service. Nov 1 00:37:18.130935 systemd-resolved[1220]: Positive Trust Anchors: Nov 1 00:37:18.131187 systemd-resolved[1220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:37:18.131281 systemd-resolved[1220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:37:18.140951 systemd-resolved[1220]: Defaulting to hostname 'linux'. Nov 1 00:37:18.142404 systemd[1]: Started systemd-resolved.service. Nov 1 00:37:18.143795 systemd[1]: Reached target network.target. Nov 1 00:37:18.145011 systemd[1]: Reached target nss-lookup.target. Nov 1 00:37:18.146290 systemd[1]: Reached target sysinit.target. Nov 1 00:37:18.147582 systemd[1]: Started motdgen.path. Nov 1 00:37:18.148668 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:37:18.150528 systemd[1]: Started logrotate.timer. 
Nov 1 00:37:18.151695 systemd[1]: Started mdadm.timer. Nov 1 00:37:18.152729 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:37:18.154064 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:37:18.154080 systemd[1]: Reached target paths.target. Nov 1 00:37:18.155244 systemd[1]: Reached target timers.target. Nov 1 00:37:18.156772 systemd[1]: Listening on dbus.socket. Nov 1 00:37:18.159079 systemd[1]: Starting docker.socket... Nov 1 00:37:18.161022 systemd[1]: Listening on sshd.socket. Nov 1 00:37:18.162223 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:37:18.162485 systemd[1]: Listening on docker.socket. Nov 1 00:37:18.163650 systemd[1]: Reached target sockets.target. Nov 1 00:37:18.164861 systemd[1]: Reached target basic.target. Nov 1 00:37:18.166137 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:37:18.166189 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.166210 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:37:18.167048 systemd[1]: Starting containerd.service... Nov 1 00:37:18.169033 systemd[1]: Starting dbus.service... Nov 1 00:37:18.171017 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:37:18.173228 systemd[1]: Starting extend-filesystems.service... Nov 1 00:37:18.174712 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:37:18.175665 systemd[1]: Starting motdgen.service... Nov 1 00:37:18.176554 jq[1274]: false Nov 1 00:37:18.177858 systemd[1]: Starting prepare-helm.service... 
Nov 1 00:37:18.179849 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:37:18.182217 systemd[1]: Starting sshd-keygen.service... Nov 1 00:37:18.187113 systemd[1]: Starting systemd-logind.service... Nov 1 00:37:18.189753 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:37:18.189810 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:37:18.190819 systemd[1]: Starting update-engine.service... Nov 1 00:37:18.193659 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:37:18.196563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:37:18.197767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:37:18.198191 jq[1294]: true Nov 1 00:37:18.198092 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:37:18.198276 systemd[1]: Finished motdgen.service. Nov 1 00:37:18.201846 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:37:18.202189 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Nov 1 00:37:18.211442 jq[1302]: true Nov 1 00:37:18.211577 tar[1301]: linux-amd64/LICENSE Nov 1 00:37:18.211577 tar[1301]: linux-amd64/helm Nov 1 00:37:18.214152 extend-filesystems[1275]: Found loop1 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found sr0 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda1 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda2 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda3 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found usr Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda4 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda6 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda7 Nov 1 00:37:18.214152 extend-filesystems[1275]: Found vda9 Nov 1 00:37:18.214152 extend-filesystems[1275]: Checking size of /dev/vda9 Nov 1 00:37:18.221698 systemd[1]: Started dbus.service. Nov 1 00:37:18.260931 update_engine[1292]: I1101 00:37:18.260635 1292 main.cc:92] Flatcar Update Engine starting Nov 1 00:37:18.221490 dbus-daemon[1273]: [system] SELinux support is enabled Nov 1 00:37:18.226949 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:37:18.226971 systemd[1]: Reached target system-config.target. Nov 1 00:37:18.231099 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:37:18.231115 systemd[1]: Reached target user-config.target. Nov 1 00:37:18.268002 systemd[1]: Started update-engine.service. 
Nov 1 00:37:18.270286 systemd-logind[1291]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:37:18.271122 update_engine[1292]: I1101 00:37:18.268018 1292 update_check_scheduler.cc:74] Next update check in 8m17s Nov 1 00:37:18.270301 systemd-logind[1291]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:37:18.270907 systemd[1]: Started locksmithd.service. Nov 1 00:37:18.271819 systemd-logind[1291]: New seat seat0. Nov 1 00:37:18.278131 systemd[1]: Started systemd-logind.service. Nov 1 00:37:18.289618 systemd-networkd[1077]: eth0: Gained IPv6LL Nov 1 00:37:18.292864 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:37:18.294751 systemd[1]: Reached target network-online.target. Nov 1 00:37:18.297516 systemd[1]: Starting kubelet.service... Nov 1 00:37:18.303748 extend-filesystems[1275]: Resized partition /dev/vda9 Nov 1 00:37:18.326995 env[1303]: time="2025-11-01T00:37:18.326949142Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:37:18.342050 env[1303]: time="2025-11-01T00:37:18.342024974Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:37:18.342204 env[1303]: time="2025-11-01T00:37:18.342186447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:37:18.343503 env[1303]: time="2025-11-01T00:37:18.343452401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:37:18.343503 env[1303]: time="2025-11-01T00:37:18.343497456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:37:18.343779 env[1303]: time="2025-11-01T00:37:18.343747224Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:37:18.343779 env[1303]: time="2025-11-01T00:37:18.343768614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:37:18.343824 env[1303]: time="2025-11-01T00:37:18.343780637Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:37:18.343824 env[1303]: time="2025-11-01T00:37:18.343791116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:37:18.343875 env[1303]: time="2025-11-01T00:37:18.343858382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:37:18.344068 env[1303]: time="2025-11-01T00:37:18.344049882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:37:18.344207 env[1303]: time="2025-11-01T00:37:18.344188341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:37:18.344236 env[1303]: time="2025-11-01T00:37:18.344206696Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 1 00:37:18.344258 env[1303]: time="2025-11-01T00:37:18.344248464Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:37:18.344281 env[1303]: time="2025-11-01T00:37:18.344259605Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:37:18.443509 extend-filesystems[1338]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:37:18.529274 locksmithd[1328]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:37:18.566411 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:37:18.628421 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:37:18.810725 extend-filesystems[1338]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:37:18.810725 extend-filesystems[1338]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:37:18.810725 extend-filesystems[1338]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:37:18.834909 extend-filesystems[1275]: Resized filesystem in /dev/vda9 Nov 1 00:37:18.836755 bash[1327]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:37:18.811215 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815553169Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815621336Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815640933Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815751400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815805131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815826541Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815872037Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815900820Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815956184Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815979428Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.815994987Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.816009945Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.816215010Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:37:18.837077 env[1303]: time="2025-11-01T00:37:18.816324866Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:37:18.811467 systemd[1]: Finished extend-filesystems.service. 
Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.816874997Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.816926885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.816955268Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817092776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817122251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817185810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817211448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817265600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817291058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817341993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817371148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817427233Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817723278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817783241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.839954 env[1303]: time="2025-11-01T00:37:18.817813517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.816870 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:37:18.840363 env[1303]: time="2025-11-01T00:37:18.817841239Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:37:18.840363 env[1303]: time="2025-11-01T00:37:18.817872518Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:37:18.840363 env[1303]: time="2025-11-01T00:37:18.817899188Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:37:18.840363 env[1303]: time="2025-11-01T00:37:18.817934424Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:37:18.840363 env[1303]: time="2025-11-01T00:37:18.817989207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:37:18.821755 systemd[1]: Started containerd.service. 
Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.818277678Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.818359592Z" level=info msg="Connect containerd service" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.818434712Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.819121520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.820174315Z" level=info msg="Start subscribing containerd event" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.820307394Z" level=info msg="Start recovering state" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.820397263Z" level=info msg="Start event monitor" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.820413894Z" level=info msg="Start snapshots syncer" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.820423302Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.820433090Z" level=info msg="Start streaming server" Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.821557018Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.821624344Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:37:18.840563 env[1303]: time="2025-11-01T00:37:18.823796728Z" level=info msg="containerd successfully booted in 0.497477s" Nov 1 00:37:18.930775 tar[1301]: linux-amd64/README.md Nov 1 00:37:18.937715 systemd[1]: Finished prepare-helm.service. 
Nov 1 00:37:19.182186 sshd_keygen[1300]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:37:19.203340 systemd[1]: Finished sshd-keygen.service. Nov 1 00:37:19.207280 systemd[1]: Starting issuegen.service... Nov 1 00:37:19.212630 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:37:19.212985 systemd[1]: Finished issuegen.service. Nov 1 00:37:19.216725 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:37:19.229034 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:37:19.233113 systemd[1]: Started getty@tty1.service. Nov 1 00:37:19.236682 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:37:19.238747 systemd[1]: Reached target getty.target. Nov 1 00:37:19.727215 systemd[1]: Started kubelet.service. Nov 1 00:37:19.729824 systemd[1]: Reached target multi-user.target. Nov 1 00:37:19.733759 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:37:19.742044 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:37:19.742351 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:37:19.745984 systemd[1]: Startup finished in 6.463s (kernel) + 7.348s (userspace) = 13.812s. Nov 1 00:37:20.328001 systemd[1]: Created slice system-sshd.slice. Nov 1 00:37:20.329478 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:52488.service. Nov 1 00:37:20.412257 sshd[1385]: Accepted publickey for core from 10.0.0.1 port 52488 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:37:20.422141 sshd[1385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:37:20.433097 systemd-logind[1291]: New session 1 of user core. Nov 1 00:37:20.434085 systemd[1]: Created slice user-500.slice. Nov 1 00:37:20.436298 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:37:20.445395 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:37:20.447086 systemd[1]: Starting user@500.service... 
Nov 1 00:37:20.450258 (systemd)[1389]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:37:20.509735 kubelet[1376]: E1101 00:37:20.509684 1376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:37:20.511711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:37:20.511925 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:37:20.529573 systemd[1389]: Queued start job for default target default.target. Nov 1 00:37:20.529795 systemd[1389]: Reached target paths.target. Nov 1 00:37:20.529811 systemd[1389]: Reached target sockets.target. Nov 1 00:37:20.529823 systemd[1389]: Reached target timers.target. Nov 1 00:37:20.529833 systemd[1389]: Reached target basic.target. Nov 1 00:37:20.529962 systemd[1]: Started user@500.service. Nov 1 00:37:20.530731 systemd[1389]: Reached target default.target. Nov 1 00:37:20.530807 systemd[1389]: Startup finished in 71ms. Nov 1 00:37:20.530843 systemd[1]: Started session-1.scope. Nov 1 00:37:20.583024 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:52498.service. Nov 1 00:37:20.625756 sshd[1400]: Accepted publickey for core from 10.0.0.1 port 52498 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:37:20.627111 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:37:20.631709 systemd-logind[1291]: New session 2 of user core. Nov 1 00:37:20.632509 systemd[1]: Started session-2.scope. Nov 1 00:37:20.690974 sshd[1400]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:20.693308 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:52514.service. 
Nov 1 00:37:20.693748 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:52498.service: Deactivated successfully. Nov 1 00:37:20.694626 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:37:20.694690 systemd-logind[1291]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:37:20.695490 systemd-logind[1291]: Removed session 2. Nov 1 00:37:20.728827 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 52514 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:37:20.730013 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:37:20.733778 systemd-logind[1291]: New session 3 of user core. Nov 1 00:37:20.734481 systemd[1]: Started session-3.scope. Nov 1 00:37:20.785707 sshd[1405]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:20.788668 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:52530.service. Nov 1 00:37:20.789292 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:52514.service: Deactivated successfully. Nov 1 00:37:20.790252 systemd-logind[1291]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:37:20.790252 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:37:20.791433 systemd-logind[1291]: Removed session 3. Nov 1 00:37:20.827600 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 52530 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:37:20.828930 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:37:20.833910 systemd-logind[1291]: New session 4 of user core. Nov 1 00:37:20.834222 systemd[1]: Started session-4.scope. Nov 1 00:37:20.893496 sshd[1412]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:20.896232 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:52536.service. Nov 1 00:37:20.896756 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:52530.service: Deactivated successfully. Nov 1 00:37:20.897871 systemd[1]: session-4.scope: Deactivated successfully. 
Nov 1 00:37:20.898016 systemd-logind[1291]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:37:20.899069 systemd-logind[1291]: Removed session 4. Nov 1 00:37:20.936900 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 52536 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:37:20.938554 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:37:20.942635 systemd-logind[1291]: New session 5 of user core. Nov 1 00:37:20.943434 systemd[1]: Started session-5.scope. Nov 1 00:37:21.005906 sudo[1425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:37:21.006343 sudo[1425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:37:21.084823 systemd[1]: Starting docker.service... Nov 1 00:37:21.206594 env[1437]: time="2025-11-01T00:37:21.206530795Z" level=info msg="Starting up" Nov 1 00:37:21.208478 env[1437]: time="2025-11-01T00:37:21.208417636Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:37:21.208478 env[1437]: time="2025-11-01T00:37:21.208462722Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:37:21.208640 env[1437]: time="2025-11-01T00:37:21.208500419Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:37:21.208640 env[1437]: time="2025-11-01T00:37:21.208516367Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:37:21.212942 env[1437]: time="2025-11-01T00:37:21.212899395Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:37:21.212942 env[1437]: time="2025-11-01T00:37:21.212927315Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:37:21.213425 env[1437]: time="2025-11-01T00:37:21.212948408Z" level=info msg="ccResolverWrapper: sending update to cc: 
{[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:37:21.213425 env[1437]: time="2025-11-01T00:37:21.212962060Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:37:21.937333 env[1437]: time="2025-11-01T00:37:21.937277032Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 00:37:21.937333 env[1437]: time="2025-11-01T00:37:21.937316782Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 00:37:21.937603 env[1437]: time="2025-11-01T00:37:21.937585530Z" level=info msg="Loading containers: start." Nov 1 00:37:22.072423 kernel: Initializing XFRM netlink socket Nov 1 00:37:22.100126 env[1437]: time="2025-11-01T00:37:22.100086679Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:37:22.153608 systemd-networkd[1077]: docker0: Link UP Nov 1 00:37:22.167427 env[1437]: time="2025-11-01T00:37:22.167379948Z" level=info msg="Loading containers: done." Nov 1 00:37:22.206888 env[1437]: time="2025-11-01T00:37:22.206772006Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:37:22.207331 env[1437]: time="2025-11-01T00:37:22.206970213Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:37:22.207331 env[1437]: time="2025-11-01T00:37:22.207093868Z" level=info msg="Daemon has completed initialization" Nov 1 00:37:22.227429 systemd[1]: Started docker.service. 
Nov 1 00:37:22.240793 env[1437]: time="2025-11-01T00:37:22.240724815Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:37:23.194673 env[1303]: time="2025-11-01T00:37:23.194631173Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:37:25.095514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012652438.mount: Deactivated successfully. Nov 1 00:37:26.593278 env[1303]: time="2025-11-01T00:37:26.593203717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:26.595008 env[1303]: time="2025-11-01T00:37:26.594970869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:26.596699 env[1303]: time="2025-11-01T00:37:26.596665834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:26.598314 env[1303]: time="2025-11-01T00:37:26.598291734Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:26.599024 env[1303]: time="2025-11-01T00:37:26.598988286Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:37:26.599666 env[1303]: time="2025-11-01T00:37:26.599639217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:37:28.200080 env[1303]: time="2025-11-01T00:37:28.199997287Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:28.202459 env[1303]: time="2025-11-01T00:37:28.202412175Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:28.205029 env[1303]: time="2025-11-01T00:37:28.204991179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:28.207278 env[1303]: time="2025-11-01T00:37:28.207216744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:28.208046 env[1303]: time="2025-11-01T00:37:28.208008737Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:37:28.208623 env[1303]: time="2025-11-01T00:37:28.208535658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:37:30.699711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:37:30.699893 systemd[1]: Stopped kubelet.service. Nov 1 00:37:30.701479 systemd[1]: Starting kubelet.service... Nov 1 00:37:30.797152 systemd[1]: Started kubelet.service. 
Nov 1 00:37:31.001674 kubelet[1576]: E1101 00:37:31.001539 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:37:31.005244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:37:31.005483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:37:31.705775 env[1303]: time="2025-11-01T00:37:31.705690641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:31.709048 env[1303]: time="2025-11-01T00:37:31.708957269Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:31.712156 env[1303]: time="2025-11-01T00:37:31.712080602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:31.714245 env[1303]: time="2025-11-01T00:37:31.714190826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:31.715109 env[1303]: time="2025-11-01T00:37:31.715052533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:37:31.715836 env[1303]: time="2025-11-01T00:37:31.715792003Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:37:33.014893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171070299.mount: Deactivated successfully. Nov 1 00:37:34.325402 env[1303]: time="2025-11-01T00:37:34.325312913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:34.327250 env[1303]: time="2025-11-01T00:37:34.327189549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:34.328692 env[1303]: time="2025-11-01T00:37:34.328627661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:34.330080 env[1303]: time="2025-11-01T00:37:34.330042509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:34.330442 env[1303]: time="2025-11-01T00:37:34.330409868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:37:34.331150 env[1303]: time="2025-11-01T00:37:34.331113622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:37:34.945715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476996847.mount: Deactivated successfully. 
Nov 1 00:37:37.054861 env[1303]: time="2025-11-01T00:37:37.054778153Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.057230 env[1303]: time="2025-11-01T00:37:37.057193724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.062218 env[1303]: time="2025-11-01T00:37:37.062164131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.064944 env[1303]: time="2025-11-01T00:37:37.064890618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.066318 env[1303]: time="2025-11-01T00:37:37.066267134Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:37:37.067192 env[1303]: time="2025-11-01T00:37:37.067132833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:37:37.732533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997891699.mount: Deactivated successfully. 
Nov 1 00:37:37.738689 env[1303]: time="2025-11-01T00:37:37.738650918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.740674 env[1303]: time="2025-11-01T00:37:37.740650236Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.743353 env[1303]: time="2025-11-01T00:37:37.743313037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.746089 env[1303]: time="2025-11-01T00:37:37.746057297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:37:37.746481 env[1303]: time="2025-11-01T00:37:37.746448339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:37:37.746954 env[1303]: time="2025-11-01T00:37:37.746889053Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:37:38.219098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233317528.mount: Deactivated successfully. Nov 1 00:37:41.199663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:37:41.199862 systemd[1]: Stopped kubelet.service. Nov 1 00:37:41.201197 systemd[1]: Starting kubelet.service... Nov 1 00:37:42.292210 systemd[1]: Started kubelet.service. 
Nov 1 00:37:42.344935 kubelet[1592]: E1101 00:37:42.344881 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:37:42.347397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:37:42.347603 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:37:43.134196 env[1303]: time="2025-11-01T00:37:43.134093269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:43.171027 env[1303]: time="2025-11-01T00:37:43.170965624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:43.208395 env[1303]: time="2025-11-01T00:37:43.208319791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:43.262175 env[1303]: time="2025-11-01T00:37:43.262080219Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:43.263336 env[1303]: time="2025-11-01T00:37:43.263270814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 1 00:37:45.515946 systemd[1]: Stopped kubelet.service.
Nov 1 00:37:45.518310 systemd[1]: Starting kubelet.service...
Nov 1 00:37:45.539828 systemd[1]: Reloading.
Nov 1 00:37:45.601971 /usr/lib/systemd/system-generators/torcx-generator[1649]: time="2025-11-01T00:37:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:37:45.602435 /usr/lib/systemd/system-generators/torcx-generator[1649]: time="2025-11-01T00:37:45Z" level=info msg="torcx already run"
Nov 1 00:37:46.169164 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:37:46.169186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:37:46.192126 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:37:46.260185 systemd[1]: Started kubelet.service.
Nov 1 00:37:46.261525 systemd[1]: Stopping kubelet.service...
Nov 1 00:37:46.261796 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 00:37:46.262032 systemd[1]: Stopped kubelet.service.
Nov 1 00:37:46.263881 systemd[1]: Starting kubelet.service...
Nov 1 00:37:46.370048 systemd[1]: Started kubelet.service.
Nov 1 00:37:46.406483 kubelet[1710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:37:46.406483 kubelet[1710]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:37:46.406483 kubelet[1710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:37:46.407037 kubelet[1710]: I1101 00:37:46.406522 1710 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 00:37:46.632313 kubelet[1710]: I1101 00:37:46.632257 1710 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 1 00:37:46.632313 kubelet[1710]: I1101 00:37:46.632292 1710 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 00:37:46.632649 kubelet[1710]: I1101 00:37:46.632596 1710 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 1 00:37:46.660299 kubelet[1710]: E1101 00:37:46.660215 1710 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:46.661749 kubelet[1710]: I1101 00:37:46.661696 1710 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 00:37:46.668343 kubelet[1710]: E1101 00:37:46.668287 1710 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 00:37:46.668343 kubelet[1710]: I1101 00:37:46.668327 1710 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 00:37:46.673605 kubelet[1710]: I1101 00:37:46.673548 1710 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 00:37:46.674063 kubelet[1710]: I1101 00:37:46.674024 1710 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 00:37:46.674262 kubelet[1710]: I1101 00:37:46.674053 1710 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 1 00:37:46.674352 kubelet[1710]: I1101 00:37:46.674266 1710 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 00:37:46.674352 kubelet[1710]: I1101 00:37:46.674275 1710 container_manager_linux.go:304] "Creating device plugin manager"
Nov 1 00:37:46.674439 kubelet[1710]: I1101 00:37:46.674429 1710 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:37:46.677941 kubelet[1710]: I1101 00:37:46.677900 1710 kubelet.go:446] "Attempting to sync node with API server"
Nov 1 00:37:46.677941 kubelet[1710]: I1101 00:37:46.677948 1710 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 00:37:46.678096 kubelet[1710]: I1101 00:37:46.677975 1710 kubelet.go:352] "Adding apiserver pod source"
Nov 1 00:37:46.678096 kubelet[1710]: I1101 00:37:46.677989 1710 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 00:37:46.689573 kubelet[1710]: W1101 00:37:46.689485 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:46.689573 kubelet[1710]: E1101 00:37:46.689565 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:46.691584 kubelet[1710]: W1101 00:37:46.691528 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:46.691665 kubelet[1710]: E1101 00:37:46.691584 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:46.692748 kubelet[1710]: I1101 00:37:46.692714 1710 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Nov 1 00:37:46.693130 kubelet[1710]: I1101 00:37:46.693102 1710 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 1 00:37:46.697332 kubelet[1710]: W1101 00:37:46.697280 1710 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 00:37:46.700782 kubelet[1710]: I1101 00:37:46.700720 1710 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 00:37:46.700782 kubelet[1710]: I1101 00:37:46.700757 1710 server.go:1287] "Started kubelet"
Nov 1 00:37:46.704559 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Nov 1 00:37:46.704656 kubelet[1710]: I1101 00:37:46.704634 1710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 00:37:46.710700 kubelet[1710]: I1101 00:37:46.710148 1710 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 00:37:46.710898 kubelet[1710]: I1101 00:37:46.710878 1710 server.go:479] "Adding debug handlers to kubelet server"
Nov 1 00:37:46.711086 kubelet[1710]: I1101 00:37:46.711041 1710 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 00:37:46.711448 kubelet[1710]: E1101 00:37:46.711422 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:46.711773 kubelet[1710]: I1101 00:37:46.711731 1710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 00:37:46.711925 kubelet[1710]: I1101 00:37:46.711910 1710 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 00:37:46.712081 kubelet[1710]: I1101 00:37:46.712071 1710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 00:37:46.719880 kubelet[1710]: I1101 00:37:46.719309 1710 factory.go:221] Registration of the systemd container factory successfully
Nov 1 00:37:46.719880 kubelet[1710]: I1101 00:37:46.719407 1710 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 00:37:46.720226 kubelet[1710]: I1101 00:37:46.720209 1710 factory.go:221] Registration of the containerd container factory successfully
Nov 1 00:37:46.724811 kubelet[1710]: I1101 00:37:46.724784 1710 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 00:37:46.724940 kubelet[1710]: I1101 00:37:46.724925 1710 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 00:37:46.725562 kubelet[1710]: E1101 00:37:46.725510 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms"
Nov 1 00:37:46.725733 kubelet[1710]: W1101 00:37:46.725585 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:46.725733 kubelet[1710]: E1101 00:37:46.725626 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:46.729862 kubelet[1710]: E1101 00:37:46.728663 1710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bafb100b626a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:37:46.700735082 +0000 UTC m=+0.326626003,LastTimestamp:2025-11-01 00:37:46.700735082 +0000 UTC m=+0.326626003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 1 00:37:46.735475 kubelet[1710]: I1101 00:37:46.735425 1710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 1 00:37:46.736565 kubelet[1710]: I1101 00:37:46.736545 1710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 1 00:37:46.736565 kubelet[1710]: I1101 00:37:46.736567 1710 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 1 00:37:46.736732 kubelet[1710]: I1101 00:37:46.736712 1710 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 00:37:46.736732 kubelet[1710]: I1101 00:37:46.736724 1710 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 1 00:37:46.736871 kubelet[1710]: E1101 00:37:46.736763 1710 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 00:37:46.737235 kubelet[1710]: W1101 00:37:46.737188 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:46.737235 kubelet[1710]: E1101 00:37:46.737231 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:46.743207 kubelet[1710]: I1101 00:37:46.743187 1710 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 00:37:46.743313 kubelet[1710]: I1101 00:37:46.743288 1710 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 00:37:46.743313 kubelet[1710]: I1101 00:37:46.743315 1710 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:37:46.812478 kubelet[1710]: E1101 00:37:46.812435 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:46.836951 kubelet[1710]: E1101 00:37:46.836905 1710 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 1 00:37:46.912781 kubelet[1710]: E1101 00:37:46.912671 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:46.926452 kubelet[1710]: E1101 00:37:46.926410 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms"
Nov 1 00:37:47.013663 kubelet[1710]: E1101 00:37:47.013581 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:47.037289 kubelet[1710]: E1101 00:37:47.037229 1710 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 1 00:37:47.114545 kubelet[1710]: E1101 00:37:47.114477 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:47.214796 kubelet[1710]: E1101 00:37:47.214726 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:47.315366 kubelet[1710]: E1101 00:37:47.315302 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:47.327192 kubelet[1710]: E1101 00:37:47.327140 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms"
Nov 1 00:37:47.416489 kubelet[1710]: E1101 00:37:47.416430 1710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 00:37:47.438191 kubelet[1710]: E1101 00:37:47.438136 1710 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 1 00:37:47.471978 kubelet[1710]: I1101 00:37:47.471832 1710 policy_none.go:49] "None policy: Start"
Nov 1 00:37:47.471978 kubelet[1710]: I1101 00:37:47.471884 1710 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 1 00:37:47.471978 kubelet[1710]: I1101 00:37:47.471914 1710 state_mem.go:35] "Initializing new in-memory state store"
Nov 1 00:37:47.477576 kubelet[1710]: I1101 00:37:47.477547 1710 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 1 00:37:47.477729 kubelet[1710]: I1101 00:37:47.477714 1710 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 00:37:47.477789 kubelet[1710]: I1101 00:37:47.477731 1710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 00:37:47.478880 kubelet[1710]: I1101 00:37:47.478580 1710 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 00:37:47.479310 kubelet[1710]: E1101 00:37:47.479292 1710 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 00:37:47.479367 kubelet[1710]: E1101 00:37:47.479324 1710 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 1 00:37:47.579441 kubelet[1710]: I1101 00:37:47.579374 1710 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 00:37:47.579756 kubelet[1710]: E1101 00:37:47.579726 1710 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Nov 1 00:37:47.781009 kubelet[1710]: I1101 00:37:47.780887 1710 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 00:37:47.781246 kubelet[1710]: E1101 00:37:47.781220 1710 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Nov 1 00:37:48.128849 kubelet[1710]: E1101 00:37:48.128692 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="1.6s"
Nov 1 00:37:48.157551 kubelet[1710]: W1101 00:37:48.157477 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:48.157551 kubelet[1710]: E1101 00:37:48.157526 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:48.160243 kubelet[1710]: W1101 00:37:48.160206 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:48.160243 kubelet[1710]: E1101 00:37:48.160235 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:48.178892 kubelet[1710]: W1101 00:37:48.178804 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:48.178892 kubelet[1710]: E1101 00:37:48.178862 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:48.183105 kubelet[1710]: I1101 00:37:48.183085 1710 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 00:37:48.183506 kubelet[1710]: E1101 00:37:48.183459 1710 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Nov 1 00:37:48.197355 kubelet[1710]: W1101 00:37:48.197250 1710 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Nov 1 00:37:48.197451 kubelet[1710]: E1101 00:37:48.197358 1710 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:48.243057 kubelet[1710]: E1101 00:37:48.243030 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 1 00:37:48.247902 kubelet[1710]: E1101 00:37:48.247865 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 1 00:37:48.248428 kubelet[1710]: E1101 00:37:48.248393 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 1 00:37:48.335207 kubelet[1710]: I1101 00:37:48.335138 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6109992bac8086f629411b16b62a0225-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6109992bac8086f629411b16b62a0225\") " pod="kube-system/kube-apiserver-localhost"
Nov 1 00:37:48.335207 kubelet[1710]: I1101 00:37:48.335205 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 00:37:48.335411 kubelet[1710]: I1101 00:37:48.335244 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 00:37:48.335411 kubelet[1710]: I1101 00:37:48.335277 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 00:37:48.335411 kubelet[1710]: I1101 00:37:48.335298 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 1 00:37:48.335411 kubelet[1710]: I1101 00:37:48.335326 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6109992bac8086f629411b16b62a0225-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6109992bac8086f629411b16b62a0225\") " pod="kube-system/kube-apiserver-localhost"
Nov 1 00:37:48.335411 kubelet[1710]: I1101 00:37:48.335357 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6109992bac8086f629411b16b62a0225-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6109992bac8086f629411b16b62a0225\") " pod="kube-system/kube-apiserver-localhost"
Nov 1 00:37:48.335562 kubelet[1710]: I1101 00:37:48.335401 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 00:37:48.335562 kubelet[1710]: I1101 00:37:48.335445 1710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 00:37:48.544755 kubelet[1710]: E1101 00:37:48.544633 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:37:48.545418 env[1303]: time="2025-11-01T00:37:48.545349108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6109992bac8086f629411b16b62a0225,Namespace:kube-system,Attempt:0,}"
Nov 1 00:37:48.548722 kubelet[1710]: E1101 00:37:48.548676 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:37:48.548838 kubelet[1710]: E1101 00:37:48.548796 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:37:48.549207 env[1303]: time="2025-11-01T00:37:48.549153195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}"
Nov 1 00:37:48.549395 env[1303]: time="2025-11-01T00:37:48.549251733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}"
Nov 1 00:37:48.799479 kubelet[1710]: E1101 00:37:48.799321 1710 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:37:48.984984 kubelet[1710]: I1101 00:37:48.984951 1710 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 00:37:48.985376 kubelet[1710]: E1101 00:37:48.985330 1710 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Nov 1 00:37:49.096058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118347946.mount: Deactivated successfully.
Nov 1 00:37:49.374926 env[1303]: time="2025-11-01T00:37:49.374797219Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.378041 env[1303]: time="2025-11-01T00:37:49.377979693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.379299 env[1303]: time="2025-11-01T00:37:49.379256968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.380577 env[1303]: time="2025-11-01T00:37:49.380531950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.382695 env[1303]: time="2025-11-01T00:37:49.382656585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.384081 env[1303]: time="2025-11-01T00:37:49.384049781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.385240 env[1303]: time="2025-11-01T00:37:49.385213680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.397258 env[1303]: time="2025-11-01T00:37:49.397228277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.426392 env[1303]: time="2025-11-01T00:37:49.426344752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.445636 env[1303]: time="2025-11-01T00:37:49.445574940Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.469199 env[1303]: time="2025-11-01T00:37:49.469163836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.471347 env[1303]: time="2025-11-01T00:37:49.471293241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:37:49.550102 env[1303]: time="2025-11-01T00:37:49.550019885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:37:49.550102 env[1303]: time="2025-11-01T00:37:49.550065236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:37:49.550102 env[1303]: time="2025-11-01T00:37:49.550097922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:37:49.550949 env[1303]: time="2025-11-01T00:37:49.550861333Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eef1e4857612c4883fb6d82b58425fc5df3cd62852e029ca48053aa0ee33ac17 pid=1754 runtime=io.containerd.runc.v2
Nov 1 00:37:49.557553 env[1303]: time="2025-11-01T00:37:49.557476728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:37:49.557553 env[1303]: time="2025-11-01T00:37:49.557517409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:37:49.557553 env[1303]: time="2025-11-01T00:37:49.557542610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:37:49.558081 env[1303]: time="2025-11-01T00:37:49.557751546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1524e1692acc3667b3d2aab0b01a0138d02524ac5a7214db5869819ffc01116 pid=1771 runtime=io.containerd.runc.v2
Nov 1 00:37:49.561047 env[1303]: time="2025-11-01T00:37:49.560844992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:37:49.561047 env[1303]: time="2025-11-01T00:37:49.560896845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:37:49.561047 env[1303]: time="2025-11-01T00:37:49.560907867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:37:49.561871 env[1303]: time="2025-11-01T00:37:49.561197354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/695e78b72474cad959d66d294c30b0bece3c6c84c04bd03e4ebda4ff72ea4da5 pid=1767 runtime=io.containerd.runc.v2 Nov 1 00:37:49.728875 env[1303]: time="2025-11-01T00:37:49.728823059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"695e78b72474cad959d66d294c30b0bece3c6c84c04bd03e4ebda4ff72ea4da5\"" Nov 1 00:37:49.729480 env[1303]: time="2025-11-01T00:37:49.729460639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"eef1e4857612c4883fb6d82b58425fc5df3cd62852e029ca48053aa0ee33ac17\"" Nov 1 00:37:49.731623 kubelet[1710]: E1101 00:37:49.730605 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="3.2s" Nov 1 00:37:49.732928 kubelet[1710]: E1101 00:37:49.732697 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:49.734768 kubelet[1710]: E1101 00:37:49.733175 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:49.739095 env[1303]: time="2025-11-01T00:37:49.739064712Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6109992bac8086f629411b16b62a0225,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1524e1692acc3667b3d2aab0b01a0138d02524ac5a7214db5869819ffc01116\"" Nov 1 00:37:49.740740 env[1303]: time="2025-11-01T00:37:49.740719901Z" level=info msg="CreateContainer within sandbox \"eef1e4857612c4883fb6d82b58425fc5df3cd62852e029ca48053aa0ee33ac17\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:37:49.740891 kubelet[1710]: E1101 00:37:49.740858 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:49.741185 env[1303]: time="2025-11-01T00:37:49.741163324Z" level=info msg="CreateContainer within sandbox \"695e78b72474cad959d66d294c30b0bece3c6c84c04bd03e4ebda4ff72ea4da5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:37:49.743051 env[1303]: time="2025-11-01T00:37:49.743027660Z" level=info msg="CreateContainer within sandbox \"f1524e1692acc3667b3d2aab0b01a0138d02524ac5a7214db5869819ffc01116\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:37:49.941775 env[1303]: time="2025-11-01T00:37:49.941714765Z" level=info msg="CreateContainer within sandbox \"eef1e4857612c4883fb6d82b58425fc5df3cd62852e029ca48053aa0ee33ac17\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70587839f40bb1f1863ddaf4bc520806b0b15966606bc72ad32a41e727333a77\"" Nov 1 00:37:49.942347 env[1303]: time="2025-11-01T00:37:49.942313919Z" level=info msg="StartContainer for \"70587839f40bb1f1863ddaf4bc520806b0b15966606bc72ad32a41e727333a77\"" Nov 1 00:37:49.945389 env[1303]: time="2025-11-01T00:37:49.945346924Z" level=info msg="CreateContainer within sandbox \"f1524e1692acc3667b3d2aab0b01a0138d02524ac5a7214db5869819ffc01116\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"6c9709ef3a198c4a83cee3f98fdd17326c2c31c0bd3e00667e1013512181f022\"" Nov 1 00:37:49.945621 env[1303]: time="2025-11-01T00:37:49.945601201Z" level=info msg="StartContainer for \"6c9709ef3a198c4a83cee3f98fdd17326c2c31c0bd3e00667e1013512181f022\"" Nov 1 00:37:49.946885 env[1303]: time="2025-11-01T00:37:49.946857496Z" level=info msg="CreateContainer within sandbox \"695e78b72474cad959d66d294c30b0bece3c6c84c04bd03e4ebda4ff72ea4da5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8ece81da8d437a133c599ab3a6f8336cde3913260a1dc52adfb7aec28d23254b\"" Nov 1 00:37:49.947124 env[1303]: time="2025-11-01T00:37:49.947096873Z" level=info msg="StartContainer for \"8ece81da8d437a133c599ab3a6f8336cde3913260a1dc52adfb7aec28d23254b\"" Nov 1 00:37:50.012868 env[1303]: time="2025-11-01T00:37:50.012733109Z" level=info msg="StartContainer for \"70587839f40bb1f1863ddaf4bc520806b0b15966606bc72ad32a41e727333a77\" returns successfully" Nov 1 00:37:50.022293 env[1303]: time="2025-11-01T00:37:50.022252903Z" level=info msg="StartContainer for \"8ece81da8d437a133c599ab3a6f8336cde3913260a1dc52adfb7aec28d23254b\" returns successfully" Nov 1 00:37:50.031042 env[1303]: time="2025-11-01T00:37:50.031010911Z" level=info msg="StartContainer for \"6c9709ef3a198c4a83cee3f98fdd17326c2c31c0bd3e00667e1013512181f022\" returns successfully" Nov 1 00:37:50.586676 kubelet[1710]: I1101 00:37:50.586648 1710 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:37:50.749957 kubelet[1710]: E1101 00:37:50.749915 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:37:50.750355 kubelet[1710]: E1101 00:37:50.750061 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:50.751288 kubelet[1710]: E1101 00:37:50.751269 1710 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:37:50.751567 kubelet[1710]: E1101 00:37:50.751555 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:50.753368 kubelet[1710]: E1101 00:37:50.753356 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:37:50.753592 kubelet[1710]: E1101 00:37:50.753577 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:51.777704 kubelet[1710]: E1101 00:37:51.777662 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:37:51.778101 kubelet[1710]: E1101 00:37:51.777781 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:51.778101 kubelet[1710]: E1101 00:37:51.778058 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:37:51.778148 kubelet[1710]: E1101 00:37:51.778138 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:51.778317 kubelet[1710]: E1101 00:37:51.778295 1710 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:37:51.778414 kubelet[1710]: E1101 
00:37:51.778390 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:51.977003 kubelet[1710]: I1101 00:37:51.976950 1710 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:37:51.977003 kubelet[1710]: E1101 00:37:51.976999 1710 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 00:37:52.024907 kubelet[1710]: I1101 00:37:52.024853 1710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:52.030089 kubelet[1710]: E1101 00:37:52.029907 1710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:52.030089 kubelet[1710]: I1101 00:37:52.029951 1710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:52.031940 kubelet[1710]: E1101 00:37:52.031895 1710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:52.031940 kubelet[1710]: I1101 00:37:52.031930 1710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:52.033368 kubelet[1710]: E1101 00:37:52.033326 1710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:52.693729 kubelet[1710]: I1101 00:37:52.693659 1710 apiserver.go:52] "Watching apiserver" Nov 1 00:37:52.724981 kubelet[1710]: I1101 
00:37:52.724916 1710 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:37:52.778053 kubelet[1710]: I1101 00:37:52.778002 1710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:52.778578 kubelet[1710]: I1101 00:37:52.778541 1710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:52.784733 kubelet[1710]: E1101 00:37:52.784707 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:52.785356 kubelet[1710]: E1101 00:37:52.785327 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:53.780170 kubelet[1710]: E1101 00:37:53.780132 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:53.780170 kubelet[1710]: E1101 00:37:53.780176 1710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:54.366632 systemd[1]: Reloading. 
Nov 1 00:37:54.431573 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2025-11-01T00:37:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:37:54.431609 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2025-11-01T00:37:54Z" level=info msg="torcx already run" Nov 1 00:37:54.494926 kubelet[1710]: I1101 00:37:54.494860 1710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:54.505231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:37:54.505247 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:37:54.525399 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:37:54.602414 systemd[1]: Stopping kubelet.service... Nov 1 00:37:54.624786 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:37:54.625113 systemd[1]: Stopped kubelet.service. Nov 1 00:37:54.626979 systemd[1]: Starting kubelet.service... Nov 1 00:37:54.790768 systemd[1]: Started kubelet.service. Nov 1 00:37:54.836730 kubelet[2056]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:37:54.836730 kubelet[2056]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 1 00:37:54.836730 kubelet[2056]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:37:54.837123 kubelet[2056]: I1101 00:37:54.836761 2056 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:37:54.843713 kubelet[2056]: I1101 00:37:54.843669 2056 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:37:54.843713 kubelet[2056]: I1101 00:37:54.843695 2056 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:37:54.843925 kubelet[2056]: I1101 00:37:54.843913 2056 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:37:54.844997 kubelet[2056]: I1101 00:37:54.844977 2056 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:37:54.846829 kubelet[2056]: I1101 00:37:54.846798 2056 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:37:54.849693 kubelet[2056]: E1101 00:37:54.849663 2056 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:37:54.849693 kubelet[2056]: I1101 00:37:54.849692 2056 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:37:54.853731 kubelet[2056]: I1101 00:37:54.853709 2056 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:37:54.854301 kubelet[2056]: I1101 00:37:54.854260 2056 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:37:54.854528 kubelet[2056]: I1101 00:37:54.854301 2056 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:37:54.854697 kubelet[2056]: I1101 00:37:54.854532 2056 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 1 00:37:54.854697 kubelet[2056]: I1101 00:37:54.854545 2056 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:37:54.854697 kubelet[2056]: I1101 00:37:54.854606 2056 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:37:54.854850 kubelet[2056]: I1101 00:37:54.854756 2056 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:37:54.855100 kubelet[2056]: I1101 00:37:54.855071 2056 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:37:54.855158 kubelet[2056]: I1101 00:37:54.855112 2056 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:37:54.855158 kubelet[2056]: I1101 00:37:54.855133 2056 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:37:54.857731 kubelet[2056]: I1101 00:37:54.857703 2056 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:37:54.858605 kubelet[2056]: I1101 00:37:54.858575 2056 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:37:54.859341 kubelet[2056]: I1101 00:37:54.859291 2056 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:37:54.859437 kubelet[2056]: I1101 00:37:54.859355 2056 server.go:1287] "Started kubelet" Nov 1 00:37:54.862808 kubelet[2056]: I1101 00:37:54.859790 2056 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:37:54.862808 kubelet[2056]: I1101 00:37:54.861160 2056 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:37:54.862808 kubelet[2056]: I1101 00:37:54.862683 2056 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:37:54.863018 kubelet[2056]: I1101 00:37:54.862929 2056 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:37:54.867418 kubelet[2056]: I1101 00:37:54.864750 2056 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:37:54.867695 kubelet[2056]: I1101 00:37:54.867497 2056 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:37:54.872652 kubelet[2056]: E1101 00:37:54.872583 2056 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:37:54.873100 kubelet[2056]: I1101 00:37:54.873084 2056 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:37:54.874556 kubelet[2056]: I1101 00:37:54.874506 2056 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:37:54.876353 kubelet[2056]: I1101 00:37:54.874997 2056 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:37:54.876503 kubelet[2056]: I1101 00:37:54.876051 2056 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:37:54.878417 kubelet[2056]: E1101 00:37:54.878365 2056 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:37:54.879287 kubelet[2056]: I1101 00:37:54.879252 2056 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:37:54.879287 kubelet[2056]: I1101 00:37:54.879273 2056 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:37:54.885552 kubelet[2056]: I1101 00:37:54.885503 2056 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:37:54.886559 kubelet[2056]: I1101 00:37:54.886531 2056 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:37:54.886670 kubelet[2056]: I1101 00:37:54.886564 2056 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:37:54.886670 kubelet[2056]: I1101 00:37:54.886614 2056 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:37:54.886670 kubelet[2056]: I1101 00:37:54.886630 2056 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:37:54.886794 kubelet[2056]: E1101 00:37:54.886685 2056 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:37:54.922525 kubelet[2056]: I1101 00:37:54.922491 2056 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:37:54.922525 kubelet[2056]: I1101 00:37:54.922513 2056 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:37:54.922719 kubelet[2056]: I1101 00:37:54.922539 2056 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:37:54.922771 kubelet[2056]: I1101 00:37:54.922729 2056 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:37:54.922771 kubelet[2056]: I1101 00:37:54.922742 2056 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:37:54.922771 kubelet[2056]: I1101 00:37:54.922762 2056 policy_none.go:49] "None policy: Start" Nov 1 00:37:54.922875 kubelet[2056]: I1101 00:37:54.922773 2056 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:37:54.922875 kubelet[2056]: I1101 00:37:54.922784 2056 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:37:54.922935 kubelet[2056]: I1101 00:37:54.922912 2056 state_mem.go:75] "Updated machine memory state" Nov 1 00:37:54.923933 kubelet[2056]: I1101 00:37:54.923900 2056 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:37:54.924107 kubelet[2056]: I1101 00:37:54.924081 
2056 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:37:54.924170 kubelet[2056]: I1101 00:37:54.924101 2056 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:37:54.924545 kubelet[2056]: I1101 00:37:54.924515 2056 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:37:54.925006 kubelet[2056]: E1101 00:37:54.924992 2056 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:37:54.987683 kubelet[2056]: I1101 00:37:54.987645 2056 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:54.987820 kubelet[2056]: I1101 00:37:54.987645 2056 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:54.987859 kubelet[2056]: I1101 00:37:54.987648 2056 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:55.014217 kubelet[2056]: E1101 00:37:55.014174 2056 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.014217 kubelet[2056]: E1101 00:37:55.014174 2056 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:55.014504 kubelet[2056]: E1101 00:37:55.014278 2056 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:55.050359 kubelet[2056]: I1101 00:37:55.050326 2056 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:37:55.077736 kubelet[2056]: I1101 00:37:55.077674 2056 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6109992bac8086f629411b16b62a0225-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6109992bac8086f629411b16b62a0225\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:55.077736 kubelet[2056]: I1101 00:37:55.077727 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.077947 kubelet[2056]: I1101 00:37:55.077760 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.077947 kubelet[2056]: I1101 00:37:55.077785 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.077947 kubelet[2056]: I1101 00:37:55.077803 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:55.077947 kubelet[2056]: I1101 00:37:55.077843 2056 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6109992bac8086f629411b16b62a0225-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6109992bac8086f629411b16b62a0225\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:55.078221 kubelet[2056]: I1101 00:37:55.077874 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.078259 kubelet[2056]: I1101 00:37:55.078234 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.078259 kubelet[2056]: I1101 00:37:55.078251 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6109992bac8086f629411b16b62a0225-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6109992bac8086f629411b16b62a0225\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:37:55.315213 kubelet[2056]: E1101 00:37:55.315171 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:55.315446 kubelet[2056]: E1101 00:37:55.315171 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 1 00:37:55.315446 kubelet[2056]: E1101 00:37:55.315301 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:55.348758 kubelet[2056]: I1101 00:37:55.348718 2056 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:37:55.348914 kubelet[2056]: I1101 00:37:55.348837 2056 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:37:55.407744 sudo[2090]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:37:55.408041 sudo[2090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:37:55.855866 kubelet[2056]: I1101 00:37:55.855799 2056 apiserver.go:52] "Watching apiserver" Nov 1 00:37:55.875305 kubelet[2056]: I1101 00:37:55.875199 2056 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:37:55.899004 kubelet[2056]: I1101 00:37:55.898974 2056 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.899694 kubelet[2056]: E1101 00:37:55.899635 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:55.900152 kubelet[2056]: I1101 00:37:55.900124 2056 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:55.922313 kubelet[2056]: E1101 00:37:55.921348 2056 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:37:55.922313 kubelet[2056]: E1101 00:37:55.921575 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:55.922313 kubelet[2056]: E1101 00:37:55.921794 2056 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:37:55.922313 kubelet[2056]: E1101 00:37:55.921899 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:55.937851 kubelet[2056]: I1101 00:37:55.937739 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.937686918 podStartE2EDuration="3.937686918s" podCreationTimestamp="2025-11-01 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:37:55.924551083 +0000 UTC m=+1.123190411" watchObservedRunningTime="2025-11-01 00:37:55.937686918 +0000 UTC m=+1.136326216" Nov 1 00:37:55.938106 kubelet[2056]: I1101 00:37:55.937993 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.937985187 podStartE2EDuration="1.937985187s" podCreationTimestamp="2025-11-01 00:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:37:55.937964166 +0000 UTC m=+1.136603464" watchObservedRunningTime="2025-11-01 00:37:55.937985187 +0000 UTC m=+1.136624485" Nov 1 00:37:55.948653 kubelet[2056]: I1101 00:37:55.948570 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.948527947 podStartE2EDuration="3.948527947s" podCreationTimestamp="2025-11-01 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:37:55.948131996 +0000 UTC m=+1.146771324" watchObservedRunningTime="2025-11-01 00:37:55.948527947 +0000 UTC m=+1.147167245" Nov 1 00:37:56.015425 sudo[2090]: pam_unix(sudo:session): session closed for user root Nov 1 00:37:56.970153 kubelet[2056]: E1101 00:37:56.970119 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:56.970635 kubelet[2056]: E1101 00:37:56.970291 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:56.970635 kubelet[2056]: E1101 00:37:56.970568 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:57.930577 sudo[1425]: pam_unix(sudo:session): session closed for user root Nov 1 00:37:57.971356 kubelet[2056]: E1101 00:37:57.971311 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:58.142547 sshd[1420]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:58.144710 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:52536.service: Deactivated successfully. Nov 1 00:37:58.145807 systemd-logind[1291]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:37:58.145860 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:37:58.147036 systemd-logind[1291]: Removed session 5. 
Nov 1 00:37:59.358901 kubelet[2056]: E1101 00:37:59.358812 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:37:59.705312 kubelet[2056]: I1101 00:37:59.705282 2056 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:37:59.705655 env[1303]: time="2025-11-01T00:37:59.705599397Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:37:59.706088 kubelet[2056]: I1101 00:37:59.705785 2056 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:38:00.613095 kubelet[2056]: I1101 00:38:00.613014 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-bpf-maps\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613095 kubelet[2056]: I1101 00:38:00.613091 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hostproc\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613589 kubelet[2056]: I1101 00:38:00.613114 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hubble-tls\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613589 kubelet[2056]: I1101 00:38:00.613139 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jdlcp\" (UniqueName: \"kubernetes.io/projected/8068d99c-edc6-499e-b9d3-893eaeb33504-kube-api-access-jdlcp\") pod \"kube-proxy-v6j82\" (UID: \"8068d99c-edc6-499e-b9d3-893eaeb33504\") " pod="kube-system/kube-proxy-v6j82" Nov 1 00:38:00.613589 kubelet[2056]: I1101 00:38:00.613168 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-run\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613589 kubelet[2056]: I1101 00:38:00.613193 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-cgroup\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613589 kubelet[2056]: I1101 00:38:00.613221 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-net\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613589 kubelet[2056]: I1101 00:38:00.613246 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8068d99c-edc6-499e-b9d3-893eaeb33504-kube-proxy\") pod \"kube-proxy-v6j82\" (UID: \"8068d99c-edc6-499e-b9d3-893eaeb33504\") " pod="kube-system/kube-proxy-v6j82" Nov 1 00:38:00.613764 kubelet[2056]: I1101 00:38:00.613266 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8068d99c-edc6-499e-b9d3-893eaeb33504-lib-modules\") pod 
\"kube-proxy-v6j82\" (UID: \"8068d99c-edc6-499e-b9d3-893eaeb33504\") " pod="kube-system/kube-proxy-v6j82" Nov 1 00:38:00.613764 kubelet[2056]: I1101 00:38:00.613285 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-config-path\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613764 kubelet[2056]: I1101 00:38:00.613329 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxrss\" (UniqueName: \"kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-kube-api-access-hxrss\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613764 kubelet[2056]: I1101 00:38:00.613362 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8068d99c-edc6-499e-b9d3-893eaeb33504-xtables-lock\") pod \"kube-proxy-v6j82\" (UID: \"8068d99c-edc6-499e-b9d3-893eaeb33504\") " pod="kube-system/kube-proxy-v6j82" Nov 1 00:38:00.613764 kubelet[2056]: I1101 00:38:00.613401 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-xtables-lock\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613872 kubelet[2056]: I1101 00:38:00.613448 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-etc-cni-netd\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 
00:38:00.613872 kubelet[2056]: I1101 00:38:00.613483 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-lib-modules\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613872 kubelet[2056]: I1101 00:38:00.613522 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cni-path\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613872 kubelet[2056]: I1101 00:38:00.613546 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-kernel\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.613872 kubelet[2056]: I1101 00:38:00.613568 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42ff1dd8-0fa8-4203-9456-376e4add7b0f-clustermesh-secrets\") pod \"cilium-fkd72\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") " pod="kube-system/cilium-fkd72" Nov 1 00:38:00.714731 kubelet[2056]: I1101 00:38:00.714683 2056 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:38:00.814820 kubelet[2056]: I1101 00:38:00.814764 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c15cc8d-d91f-465f-9473-8fb8521f361d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zz6x5\" (UID: \"8c15cc8d-d91f-465f-9473-8fb8521f361d\") " pod="kube-system/cilium-operator-6c4d7847fc-zz6x5" Nov 1 00:38:00.814820 kubelet[2056]: I1101 00:38:00.814827 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5j56\" (UniqueName: \"kubernetes.io/projected/8c15cc8d-d91f-465f-9473-8fb8521f361d-kube-api-access-w5j56\") pod \"cilium-operator-6c4d7847fc-zz6x5\" (UID: \"8c15cc8d-d91f-465f-9473-8fb8521f361d\") " pod="kube-system/cilium-operator-6c4d7847fc-zz6x5" Nov 1 00:38:00.841305 kubelet[2056]: E1101 00:38:00.841261 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:00.841986 env[1303]: time="2025-11-01T00:38:00.841901519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6j82,Uid:8068d99c-edc6-499e-b9d3-893eaeb33504,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:00.861488 kubelet[2056]: E1101 00:38:00.861463 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:00.862031 env[1303]: time="2025-11-01T00:38:00.861970423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkd72,Uid:42ff1dd8-0fa8-4203-9456-376e4add7b0f,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:01.070588 kubelet[2056]: E1101 00:38:01.070525 2056 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:01.071420 env[1303]: time="2025-11-01T00:38:01.071179193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zz6x5,Uid:8c15cc8d-d91f-465f-9473-8fb8521f361d,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:01.315002 env[1303]: time="2025-11-01T00:38:01.314895964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:01.315186 env[1303]: time="2025-11-01T00:38:01.315000467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:01.315186 env[1303]: time="2025-11-01T00:38:01.315048321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:01.315279 env[1303]: time="2025-11-01T00:38:01.315218532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4034e340c24cd1bd301667e849cf539e7f643b2a75e5bd8a22a0d42c9950cf1a pid=2152 runtime=io.containerd.runc.v2 Nov 1 00:38:01.323233 env[1303]: time="2025-11-01T00:38:01.323018207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:01.323233 env[1303]: time="2025-11-01T00:38:01.323164852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:01.323520 env[1303]: time="2025-11-01T00:38:01.323237424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:01.323626 env[1303]: time="2025-11-01T00:38:01.323543130Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012 pid=2182 runtime=io.containerd.runc.v2 Nov 1 00:38:01.324115 env[1303]: time="2025-11-01T00:38:01.324026361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:01.324115 env[1303]: time="2025-11-01T00:38:01.324067270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:01.324115 env[1303]: time="2025-11-01T00:38:01.324083271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:01.324310 env[1303]: time="2025-11-01T00:38:01.324242472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f pid=2183 runtime=io.containerd.runc.v2 Nov 1 00:38:01.359448 env[1303]: time="2025-11-01T00:38:01.359355965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6j82,Uid:8068d99c-edc6-499e-b9d3-893eaeb33504,Namespace:kube-system,Attempt:0,} returns sandbox id \"4034e340c24cd1bd301667e849cf539e7f643b2a75e5bd8a22a0d42c9950cf1a\"" Nov 1 00:38:01.360451 kubelet[2056]: E1101 00:38:01.360412 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:01.365213 env[1303]: time="2025-11-01T00:38:01.365150595Z" level=info msg="CreateContainer within sandbox \"4034e340c24cd1bd301667e849cf539e7f643b2a75e5bd8a22a0d42c9950cf1a\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:38:01.380222 env[1303]: time="2025-11-01T00:38:01.380154904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkd72,Uid:42ff1dd8-0fa8-4203-9456-376e4add7b0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\"" Nov 1 00:38:01.381724 kubelet[2056]: E1101 00:38:01.381680 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:01.384008 env[1303]: time="2025-11-01T00:38:01.383938205Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:38:01.392053 env[1303]: time="2025-11-01T00:38:01.391868784Z" level=info msg="CreateContainer within sandbox \"4034e340c24cd1bd301667e849cf539e7f643b2a75e5bd8a22a0d42c9950cf1a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4103ca707227ede068b3f6848cf9e43108232d7e03850d945cb5c4abd64ab8c2\"" Nov 1 00:38:01.393359 env[1303]: time="2025-11-01T00:38:01.393329700Z" level=info msg="StartContainer for \"4103ca707227ede068b3f6848cf9e43108232d7e03850d945cb5c4abd64ab8c2\"" Nov 1 00:38:01.399435 env[1303]: time="2025-11-01T00:38:01.398922235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zz6x5,Uid:8c15cc8d-d91f-465f-9473-8fb8521f361d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\"" Nov 1 00:38:01.400340 kubelet[2056]: E1101 00:38:01.400290 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:01.452428 env[1303]: time="2025-11-01T00:38:01.447701341Z" level=info msg="StartContainer for 
\"4103ca707227ede068b3f6848cf9e43108232d7e03850d945cb5c4abd64ab8c2\" returns successfully" Nov 1 00:38:01.980369 kubelet[2056]: E1101 00:38:01.980333 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:01.993331 kubelet[2056]: I1101 00:38:01.993264 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v6j82" podStartSLOduration=1.993241801 podStartE2EDuration="1.993241801s" podCreationTimestamp="2025-11-01 00:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:01.99126511 +0000 UTC m=+7.189904408" watchObservedRunningTime="2025-11-01 00:38:01.993241801 +0000 UTC m=+7.191881099" Nov 1 00:38:03.934628 update_engine[1292]: I1101 00:38:03.934554 1292 update_attempter.cc:509] Updating boot flags... Nov 1 00:38:04.900866 kubelet[2056]: E1101 00:38:04.900833 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:04.990038 kubelet[2056]: E1101 00:38:04.990008 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:05.993536 kubelet[2056]: E1101 00:38:05.993488 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:07.186029 kubelet[2056]: E1101 00:38:07.185983 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:08.009833 kubelet[2056]: E1101 00:38:08.009781 
2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:08.589106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317904960.mount: Deactivated successfully. Nov 1 00:38:09.364982 kubelet[2056]: E1101 00:38:09.364932 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:14.242817 env[1303]: time="2025-11-01T00:38:14.242733737Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:14.245143 env[1303]: time="2025-11-01T00:38:14.245094688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:14.247129 env[1303]: time="2025-11-01T00:38:14.247093214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:14.247913 env[1303]: time="2025-11-01T00:38:14.247866956Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:38:14.251952 env[1303]: time="2025-11-01T00:38:14.251899307Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:38:14.254150 env[1303]: 
time="2025-11-01T00:38:14.254105111Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:38:14.271693 env[1303]: time="2025-11-01T00:38:14.271625026Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\"" Nov 1 00:38:14.272172 env[1303]: time="2025-11-01T00:38:14.272146215Z" level=info msg="StartContainer for \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\"" Nov 1 00:38:14.494917 env[1303]: time="2025-11-01T00:38:14.494749141Z" level=info msg="StartContainer for \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\" returns successfully" Nov 1 00:38:14.951400 env[1303]: time="2025-11-01T00:38:14.951260680Z" level=error msg="collecting metrics for 381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120" error="cgroups: cgroup deleted: unknown" Nov 1 00:38:15.022731 kubelet[2056]: E1101 00:38:15.022694 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:15.103959 env[1303]: time="2025-11-01T00:38:15.103904591Z" level=info msg="shim disconnected" id=381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120 Nov 1 00:38:15.104199 env[1303]: time="2025-11-01T00:38:15.104161763Z" level=warning msg="cleaning up after shim disconnected" id=381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120 namespace=k8s.io Nov 1 00:38:15.104199 env[1303]: time="2025-11-01T00:38:15.104182302Z" level=info msg="cleaning up dead shim" Nov 1 00:38:15.112848 env[1303]: time="2025-11-01T00:38:15.112799218Z" level=warning msg="cleanup warnings 
time=\"2025-11-01T00:38:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2496 runtime=io.containerd.runc.v2\n" Nov 1 00:38:15.269283 systemd[1]: run-containerd-runc-k8s.io-381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120-runc.GYsjOS.mount: Deactivated successfully. Nov 1 00:38:15.271179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120-rootfs.mount: Deactivated successfully. Nov 1 00:38:16.026219 kubelet[2056]: E1101 00:38:16.026184 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:16.028007 env[1303]: time="2025-11-01T00:38:16.027922178Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:38:16.084088 env[1303]: time="2025-11-01T00:38:16.084000390Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\"" Nov 1 00:38:16.084726 env[1303]: time="2025-11-01T00:38:16.084689147Z" level=info msg="StartContainer for \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\"" Nov 1 00:38:16.132521 env[1303]: time="2025-11-01T00:38:16.132439199Z" level=info msg="StartContainer for \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\" returns successfully" Nov 1 00:38:16.142353 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:38:16.142634 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:38:16.142858 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:38:16.144273 systemd[1]: Starting systemd-sysctl.service... 
Nov 1 00:38:16.153985 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:38:16.169973 env[1303]: time="2025-11-01T00:38:16.169910752Z" level=info msg="shim disconnected" id=d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b Nov 1 00:38:16.169973 env[1303]: time="2025-11-01T00:38:16.169965036Z" level=warning msg="cleaning up after shim disconnected" id=d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b namespace=k8s.io Nov 1 00:38:16.169973 env[1303]: time="2025-11-01T00:38:16.169976738Z" level=info msg="cleaning up dead shim" Nov 1 00:38:16.176208 env[1303]: time="2025-11-01T00:38:16.176149559Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2561 runtime=io.containerd.runc.v2\n" Nov 1 00:38:16.268485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b-rootfs.mount: Deactivated successfully. Nov 1 00:38:17.029413 kubelet[2056]: E1101 00:38:17.029343 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:17.031267 env[1303]: time="2025-11-01T00:38:17.031182452Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:38:17.399669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1439859405.mount: Deactivated successfully. Nov 1 00:38:17.420161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941825683.mount: Deactivated successfully. 
Nov 1 00:38:17.427392 env[1303]: time="2025-11-01T00:38:17.426031020Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\"" Nov 1 00:38:17.428036 env[1303]: time="2025-11-01T00:38:17.427977439Z" level=info msg="StartContainer for \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\"" Nov 1 00:38:17.483527 env[1303]: time="2025-11-01T00:38:17.483446459Z" level=info msg="StartContainer for \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\" returns successfully" Nov 1 00:38:17.520122 env[1303]: time="2025-11-01T00:38:17.520039522Z" level=info msg="shim disconnected" id=fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd Nov 1 00:38:17.520122 env[1303]: time="2025-11-01T00:38:17.520093375Z" level=warning msg="cleaning up after shim disconnected" id=fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd namespace=k8s.io Nov 1 00:38:17.520122 env[1303]: time="2025-11-01T00:38:17.520103935Z" level=info msg="cleaning up dead shim" Nov 1 00:38:17.527754 env[1303]: time="2025-11-01T00:38:17.527707680Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:38:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2615 runtime=io.containerd.runc.v2\n" Nov 1 00:38:18.032670 kubelet[2056]: E1101 00:38:18.032601 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:18.038568 env[1303]: time="2025-11-01T00:38:18.038509641Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:38:18.058682 env[1303]: time="2025-11-01T00:38:18.058594112Z" 
level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\"" Nov 1 00:38:18.059636 env[1303]: time="2025-11-01T00:38:18.059579222Z" level=info msg="StartContainer for \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\"" Nov 1 00:38:18.111776 env[1303]: time="2025-11-01T00:38:18.111718254Z" level=info msg="StartContainer for \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\" returns successfully" Nov 1 00:38:18.475141 env[1303]: time="2025-11-01T00:38:18.475060581Z" level=info msg="shim disconnected" id=9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83 Nov 1 00:38:18.475141 env[1303]: time="2025-11-01T00:38:18.475117189Z" level=warning msg="cleaning up after shim disconnected" id=9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83 namespace=k8s.io Nov 1 00:38:18.475141 env[1303]: time="2025-11-01T00:38:18.475129422Z" level=info msg="cleaning up dead shim" Nov 1 00:38:18.483048 env[1303]: time="2025-11-01T00:38:18.482973546Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:38:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2672 runtime=io.containerd.runc.v2\n" Nov 1 00:38:18.485318 env[1303]: time="2025-11-01T00:38:18.485247557Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:18.489060 env[1303]: time="2025-11-01T00:38:18.489020430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:18.491069 env[1303]: 
time="2025-11-01T00:38:18.491016840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:18.491779 env[1303]: time="2025-11-01T00:38:18.491710676Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:38:18.494272 env[1303]: time="2025-11-01T00:38:18.494210137Z" level=info msg="CreateContainer within sandbox \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:38:18.507794 env[1303]: time="2025-11-01T00:38:18.507724215Z" level=info msg="CreateContainer within sandbox \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\"" Nov 1 00:38:18.508344 env[1303]: time="2025-11-01T00:38:18.508322947Z" level=info msg="StartContainer for \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\"" Nov 1 00:38:18.559637 env[1303]: time="2025-11-01T00:38:18.559569605Z" level=info msg="StartContainer for \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\" returns successfully" Nov 1 00:38:19.035574 kubelet[2056]: E1101 00:38:19.035542 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:19.038304 kubelet[2056]: E1101 00:38:19.038269 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:19.040728 env[1303]: time="2025-11-01T00:38:19.040681051Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:38:19.062729 env[1303]: time="2025-11-01T00:38:19.062660582Z" level=info msg="CreateContainer within sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\"" Nov 1 00:38:19.063331 env[1303]: time="2025-11-01T00:38:19.063296876Z" level=info msg="StartContainer for \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\"" Nov 1 00:38:19.068669 kubelet[2056]: I1101 00:38:19.068607 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zz6x5" podStartSLOduration=1.977406006 podStartE2EDuration="19.068587489s" podCreationTimestamp="2025-11-01 00:38:00 +0000 UTC" firstStartedPulling="2025-11-01 00:38:01.401478184 +0000 UTC m=+6.600117482" lastFinishedPulling="2025-11-01 00:38:18.492659667 +0000 UTC m=+23.691298965" observedRunningTime="2025-11-01 00:38:19.047472948 +0000 UTC m=+24.246112246" watchObservedRunningTime="2025-11-01 00:38:19.068587489 +0000 UTC m=+24.267226777" Nov 1 00:38:19.125813 env[1303]: time="2025-11-01T00:38:19.125757325Z" level=info msg="StartContainer for \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\" returns successfully" Nov 1 00:38:19.263438 kubelet[2056]: I1101 00:38:19.263402 2056 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:38:19.290635 kubelet[2056]: I1101 00:38:19.290504 2056 status_manager.go:890] "Failed to get status for pod" podUID="87368007-39db-4dab-bc4a-437ecc36f1b1" pod="kube-system/coredns-668d6bf9bc-wcrwc" err="pods \"coredns-668d6bf9bc-wcrwc\" is 
forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Nov 1 00:38:19.340272 kubelet[2056]: I1101 00:38:19.340226 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jddhj\" (UniqueName: \"kubernetes.io/projected/87368007-39db-4dab-bc4a-437ecc36f1b1-kube-api-access-jddhj\") pod \"coredns-668d6bf9bc-wcrwc\" (UID: \"87368007-39db-4dab-bc4a-437ecc36f1b1\") " pod="kube-system/coredns-668d6bf9bc-wcrwc" Nov 1 00:38:19.340272 kubelet[2056]: I1101 00:38:19.340276 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87368007-39db-4dab-bc4a-437ecc36f1b1-config-volume\") pod \"coredns-668d6bf9bc-wcrwc\" (UID: \"87368007-39db-4dab-bc4a-437ecc36f1b1\") " pod="kube-system/coredns-668d6bf9bc-wcrwc" Nov 1 00:38:19.340514 kubelet[2056]: I1101 00:38:19.340341 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14c72fc8-f308-440c-905a-e984399a3c78-config-volume\") pod \"coredns-668d6bf9bc-g89mq\" (UID: \"14c72fc8-f308-440c-905a-e984399a3c78\") " pod="kube-system/coredns-668d6bf9bc-g89mq" Nov 1 00:38:19.340514 kubelet[2056]: I1101 00:38:19.340387 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzsb7\" (UniqueName: \"kubernetes.io/projected/14c72fc8-f308-440c-905a-e984399a3c78-kube-api-access-wzsb7\") pod \"coredns-668d6bf9bc-g89mq\" (UID: \"14c72fc8-f308-440c-905a-e984399a3c78\") " pod="kube-system/coredns-668d6bf9bc-g89mq" Nov 1 00:38:19.597906 kubelet[2056]: E1101 00:38:19.597735 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:19.598205 kubelet[2056]: E1101 00:38:19.598163 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:19.598814 env[1303]: time="2025-11-01T00:38:19.598739801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wcrwc,Uid:87368007-39db-4dab-bc4a-437ecc36f1b1,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:19.599022 env[1303]: time="2025-11-01T00:38:19.598998574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g89mq,Uid:14c72fc8-f308-440c-905a-e984399a3c78,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:20.044922 kubelet[2056]: E1101 00:38:20.044874 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:20.045518 kubelet[2056]: E1101 00:38:20.045484 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:21.045561 kubelet[2056]: E1101 00:38:21.045528 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:22.037688 systemd-networkd[1077]: cilium_host: Link UP Nov 1 00:38:22.037837 systemd-networkd[1077]: cilium_net: Link UP Nov 1 00:38:22.040735 systemd-networkd[1077]: cilium_net: Gained carrier Nov 1 00:38:22.042849 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:38:22.042915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:38:22.042925 systemd-networkd[1077]: cilium_host: Gained carrier Nov 1 00:38:22.043041 systemd-networkd[1077]: cilium_net: Gained IPv6LL Nov 1 
00:38:22.043192 systemd-networkd[1077]: cilium_host: Gained IPv6LL Nov 1 00:38:22.046495 kubelet[2056]: E1101 00:38:22.046476 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:22.124289 systemd-networkd[1077]: cilium_vxlan: Link UP Nov 1 00:38:22.124297 systemd-networkd[1077]: cilium_vxlan: Gained carrier Nov 1 00:38:22.364416 kernel: NET: Registered PF_ALG protocol family Nov 1 00:38:22.945818 systemd-networkd[1077]: lxc_health: Link UP Nov 1 00:38:22.961788 systemd-networkd[1077]: lxc_health: Gained carrier Nov 1 00:38:22.962490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:38:23.156514 systemd-networkd[1077]: lxce0ca454ca8aa: Link UP Nov 1 00:38:23.169723 kernel: eth0: renamed from tmp78e61 Nov 1 00:38:23.174703 systemd-networkd[1077]: lxc904577d534bf: Link UP Nov 1 00:38:23.180417 kernel: eth0: renamed from tmpcbffb Nov 1 00:38:23.189709 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:38:23.189826 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce0ca454ca8aa: link becomes ready Nov 1 00:38:23.189779 systemd-networkd[1077]: lxce0ca454ca8aa: Gained carrier Nov 1 00:38:23.205734 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc904577d534bf: link becomes ready Nov 1 00:38:23.202592 systemd-networkd[1077]: lxc904577d534bf: Gained carrier Nov 1 00:38:23.568590 systemd-networkd[1077]: cilium_vxlan: Gained IPv6LL Nov 1 00:38:24.144590 systemd-networkd[1077]: lxc_health: Gained IPv6LL Nov 1 00:38:24.656559 systemd-networkd[1077]: lxce0ca454ca8aa: Gained IPv6LL Nov 1 00:38:24.867399 kubelet[2056]: E1101 00:38:24.867315 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:24.885335 kubelet[2056]: I1101 00:38:24.885235 2056 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/cilium-fkd72" podStartSLOduration=12.017023923 podStartE2EDuration="24.885214244s" podCreationTimestamp="2025-11-01 00:38:00 +0000 UTC" firstStartedPulling="2025-11-01 00:38:01.383376009 +0000 UTC m=+6.582015307" lastFinishedPulling="2025-11-01 00:38:14.25156633 +0000 UTC m=+19.450205628" observedRunningTime="2025-11-01 00:38:20.063062288 +0000 UTC m=+25.261701606" watchObservedRunningTime="2025-11-01 00:38:24.885214244 +0000 UTC m=+30.083853573" Nov 1 00:38:25.232547 systemd-networkd[1077]: lxc904577d534bf: Gained IPv6LL Nov 1 00:38:26.600733 env[1303]: time="2025-11-01T00:38:26.600643287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:26.601100 env[1303]: time="2025-11-01T00:38:26.600724731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:26.601100 env[1303]: time="2025-11-01T00:38:26.600740361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:26.601100 env[1303]: time="2025-11-01T00:38:26.600836233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:26.601100 env[1303]: time="2025-11-01T00:38:26.600912328Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78e6119fffe4a0f08c1cda26ee570d6e3ac62a4b20e9b775663028fe562538bd pid=3298 runtime=io.containerd.runc.v2 Nov 1 00:38:26.601100 env[1303]: time="2025-11-01T00:38:26.600969146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:26.601100 env[1303]: time="2025-11-01T00:38:26.601000275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:26.601239 env[1303]: time="2025-11-01T00:38:26.601179666Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbffb201fc170de2d840260471210063511e28cef27a0b1adbadf76deeff1f19 pid=3299 runtime=io.containerd.runc.v2 Nov 1 00:38:26.623306 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:38:26.635467 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:38:26.655251 env[1303]: time="2025-11-01T00:38:26.655200132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g89mq,Uid:14c72fc8-f308-440c-905a-e984399a3c78,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbffb201fc170de2d840260471210063511e28cef27a0b1adbadf76deeff1f19\"" Nov 1 00:38:26.655989 kubelet[2056]: E1101 00:38:26.655961 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:26.658167 env[1303]: time="2025-11-01T00:38:26.658130042Z" level=info msg="CreateContainer within sandbox \"cbffb201fc170de2d840260471210063511e28cef27a0b1adbadf76deeff1f19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:38:26.659828 env[1303]: time="2025-11-01T00:38:26.659799716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wcrwc,Uid:87368007-39db-4dab-bc4a-437ecc36f1b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"78e6119fffe4a0f08c1cda26ee570d6e3ac62a4b20e9b775663028fe562538bd\"" Nov 1 00:38:26.660194 kubelet[2056]: E1101 
00:38:26.660170 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:26.662054 env[1303]: time="2025-11-01T00:38:26.662027611Z" level=info msg="CreateContainer within sandbox \"78e6119fffe4a0f08c1cda26ee570d6e3ac62a4b20e9b775663028fe562538bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:38:26.682917 env[1303]: time="2025-11-01T00:38:26.682847411Z" level=info msg="CreateContainer within sandbox \"78e6119fffe4a0f08c1cda26ee570d6e3ac62a4b20e9b775663028fe562538bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"563414ae86fce12d04586dcaf67e98ef41de96109b5484263eae19e862507276\"" Nov 1 00:38:26.683604 env[1303]: time="2025-11-01T00:38:26.683552141Z" level=info msg="StartContainer for \"563414ae86fce12d04586dcaf67e98ef41de96109b5484263eae19e862507276\"" Nov 1 00:38:26.683815 env[1303]: time="2025-11-01T00:38:26.683781927Z" level=info msg="CreateContainer within sandbox \"cbffb201fc170de2d840260471210063511e28cef27a0b1adbadf76deeff1f19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17ca063d3f2660d15d56074782fab9ae8f755eb44c75a5939492eb97d4e0156b\"" Nov 1 00:38:26.684108 env[1303]: time="2025-11-01T00:38:26.684084032Z" level=info msg="StartContainer for \"17ca063d3f2660d15d56074782fab9ae8f755eb44c75a5939492eb97d4e0156b\"" Nov 1 00:38:26.741604 env[1303]: time="2025-11-01T00:38:26.741541963Z" level=info msg="StartContainer for \"563414ae86fce12d04586dcaf67e98ef41de96109b5484263eae19e862507276\" returns successfully" Nov 1 00:38:26.747653 env[1303]: time="2025-11-01T00:38:26.747592254Z" level=info msg="StartContainer for \"17ca063d3f2660d15d56074782fab9ae8f755eb44c75a5939492eb97d4e0156b\" returns successfully" Nov 1 00:38:27.057722 kubelet[2056]: E1101 00:38:27.057625 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:27.059555 kubelet[2056]: E1101 00:38:27.059528 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:27.084683 kubelet[2056]: I1101 00:38:27.084599 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g89mq" podStartSLOduration=27.084573734 podStartE2EDuration="27.084573734s" podCreationTimestamp="2025-11-01 00:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:27.073909518 +0000 UTC m=+32.272548826" watchObservedRunningTime="2025-11-01 00:38:27.084573734 +0000 UTC m=+32.283213032" Nov 1 00:38:27.094810 kubelet[2056]: I1101 00:38:27.094694 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wcrwc" podStartSLOduration=27.094672555 podStartE2EDuration="27.094672555s" podCreationTimestamp="2025-11-01 00:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:27.084927386 +0000 UTC m=+32.283566694" watchObservedRunningTime="2025-11-01 00:38:27.094672555 +0000 UTC m=+32.293311853" Nov 1 00:38:27.713977 kubelet[2056]: I1101 00:38:27.713939 2056 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:38:27.714440 kubelet[2056]: E1101 00:38:27.714415 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:28.061814 kubelet[2056]: E1101 00:38:28.061689 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:28.061977 kubelet[2056]: E1101 00:38:28.061962 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:28.062036 kubelet[2056]: E1101 00:38:28.061991 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:29.063657 kubelet[2056]: E1101 00:38:29.063618 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:29.064032 kubelet[2056]: E1101 00:38:29.063727 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:38:32.857532 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:51334.service. Nov 1 00:38:32.900117 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 51334 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:32.902490 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:32.906539 systemd-logind[1291]: New session 6 of user core. Nov 1 00:38:32.907491 systemd[1]: Started session-6.scope. Nov 1 00:38:33.081902 sshd[3446]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:33.084791 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:51334.service: Deactivated successfully. Nov 1 00:38:33.086249 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:38:33.086883 systemd-logind[1291]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:38:33.087977 systemd-logind[1291]: Removed session 6. Nov 1 00:38:38.086609 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:51348.service. 
Nov 1 00:38:38.123950 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 51348 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:38.125129 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:38.129215 systemd-logind[1291]: New session 7 of user core. Nov 1 00:38:38.130271 systemd[1]: Started session-7.scope. Nov 1 00:38:38.264492 sshd[3462]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:38.267000 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:51348.service: Deactivated successfully. Nov 1 00:38:38.268271 systemd-logind[1291]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:38:38.268344 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:38:38.269311 systemd-logind[1291]: Removed session 7. Nov 1 00:38:43.267272 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:53038.service. Nov 1 00:38:43.349884 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:43.351235 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:43.355491 systemd-logind[1291]: New session 8 of user core. Nov 1 00:38:43.356570 systemd[1]: Started session-8.scope. Nov 1 00:38:43.515516 sshd[3477]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:43.517774 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:53038.service: Deactivated successfully. Nov 1 00:38:43.518648 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:38:43.519242 systemd-logind[1291]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:38:43.520097 systemd-logind[1291]: Removed session 8. Nov 1 00:38:48.519358 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:53040.service. 
Nov 1 00:38:48.554893 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 53040 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:48.556281 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:48.560250 systemd-logind[1291]: New session 9 of user core. Nov 1 00:38:48.561360 systemd[1]: Started session-9.scope. Nov 1 00:38:48.674394 sshd[3492]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:48.676801 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:53040.service: Deactivated successfully. Nov 1 00:38:48.677835 systemd-logind[1291]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:38:48.677862 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:38:48.678813 systemd-logind[1291]: Removed session 9. Nov 1 00:38:53.677789 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:54436.service. Nov 1 00:38:53.717198 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 54436 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:53.718668 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:53.722590 systemd-logind[1291]: New session 10 of user core. Nov 1 00:38:53.723813 systemd[1]: Started session-10.scope. Nov 1 00:38:53.847478 sshd[3508]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:53.850486 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:54442.service. Nov 1 00:38:53.851192 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:54436.service: Deactivated successfully. Nov 1 00:38:53.852309 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:38:53.852438 systemd-logind[1291]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:38:53.853616 systemd-logind[1291]: Removed session 10. 
Nov 1 00:38:53.890879 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:53.892378 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:53.896054 systemd-logind[1291]: New session 11 of user core. Nov 1 00:38:53.897027 systemd[1]: Started session-11.scope. Nov 1 00:38:54.111569 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:54456.service. Nov 1 00:38:54.112521 sshd[3521]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:54.116259 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:54442.service: Deactivated successfully. Nov 1 00:38:54.117835 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:38:54.118271 systemd-logind[1291]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:38:54.119662 systemd-logind[1291]: Removed session 11. Nov 1 00:38:54.154289 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 54456 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:54.155766 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:54.159835 systemd-logind[1291]: New session 12 of user core. Nov 1 00:38:54.160762 systemd[1]: Started session-12.scope. Nov 1 00:38:54.287032 sshd[3533]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:54.289352 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:54456.service: Deactivated successfully. Nov 1 00:38:54.290239 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:38:54.291330 systemd-logind[1291]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:38:54.292151 systemd-logind[1291]: Removed session 12. Nov 1 00:38:59.289998 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:54468.service. 
Nov 1 00:38:59.329038 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 54468 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:38:59.330077 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:38:59.334393 systemd-logind[1291]: New session 13 of user core. Nov 1 00:38:59.335480 systemd[1]: Started session-13.scope. Nov 1 00:38:59.464634 sshd[3551]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:59.467063 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:54468.service: Deactivated successfully. Nov 1 00:38:59.468145 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:38:59.468148 systemd-logind[1291]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:38:59.469004 systemd-logind[1291]: Removed session 13. Nov 1 00:39:04.468975 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:47598.service. Nov 1 00:39:04.512418 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 47598 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:39:04.513867 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:04.518299 systemd-logind[1291]: New session 14 of user core. Nov 1 00:39:04.519430 systemd[1]: Started session-14.scope. Nov 1 00:39:04.682197 sshd[3567]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:04.685059 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:47598.service: Deactivated successfully. Nov 1 00:39:04.686334 systemd-logind[1291]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:39:04.686458 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:39:04.687563 systemd-logind[1291]: Removed session 14. Nov 1 00:39:09.685656 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:47612.service. 
Nov 1 00:39:09.720773 sshd[3582]: Accepted publickey for core from 10.0.0.1 port 47612 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:09.721917 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:09.725053 systemd-logind[1291]: New session 15 of user core.
Nov 1 00:39:09.725842 systemd[1]: Started session-15.scope.
Nov 1 00:39:09.880252 sshd[3582]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:09.882405 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:47612.service: Deactivated successfully.
Nov 1 00:39:09.883284 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:39:09.884073 systemd-logind[1291]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:39:09.884842 systemd-logind[1291]: Removed session 15.
Nov 1 00:39:14.884291 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:45750.service.
Nov 1 00:39:14.888214 kubelet[2056]: E1101 00:39:14.888181 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:14.923137 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 45750 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:14.924491 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:14.928250 systemd-logind[1291]: New session 16 of user core.
Nov 1 00:39:14.929206 systemd[1]: Started session-16.scope.
Nov 1 00:39:15.047437 sshd[3596]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:15.050419 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:45764.service.
Nov 1 00:39:15.050996 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:45750.service: Deactivated successfully.
Nov 1 00:39:15.052076 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:39:15.052104 systemd-logind[1291]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:39:15.053100 systemd-logind[1291]: Removed session 16.
Nov 1 00:39:15.086590 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 45764 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:15.088130 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:15.092822 systemd-logind[1291]: New session 17 of user core.
Nov 1 00:39:15.093800 systemd[1]: Started session-17.scope.
Nov 1 00:39:15.359588 sshd[3609]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:15.361874 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:45772.service.
Nov 1 00:39:15.362595 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:45764.service: Deactivated successfully.
Nov 1 00:39:15.363509 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:39:15.364090 systemd-logind[1291]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:39:15.364929 systemd-logind[1291]: Removed session 17.
Nov 1 00:39:15.402214 sshd[3621]: Accepted publickey for core from 10.0.0.1 port 45772 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:15.403597 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:15.407951 systemd-logind[1291]: New session 18 of user core.
Nov 1 00:39:15.409077 systemd[1]: Started session-18.scope.
Nov 1 00:39:15.887600 kubelet[2056]: E1101 00:39:15.887553 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:17.588247 sshd[3621]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:17.591471 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:45778.service.
Nov 1 00:39:17.592092 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:45772.service: Deactivated successfully.
Nov 1 00:39:17.593192 systemd-logind[1291]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:39:17.593270 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:39:17.594371 systemd-logind[1291]: Removed session 18.
Nov 1 00:39:17.627808 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 45778 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:17.628947 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:17.632358 systemd-logind[1291]: New session 19 of user core.
Nov 1 00:39:17.633354 systemd[1]: Started session-19.scope.
Nov 1 00:39:18.439261 sshd[3641]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:18.442578 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:45784.service.
Nov 1 00:39:18.443316 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:45778.service: Deactivated successfully.
Nov 1 00:39:18.444328 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:39:18.444908 systemd-logind[1291]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:39:18.445787 systemd-logind[1291]: Removed session 19.
Nov 1 00:39:18.477829 sshd[3654]: Accepted publickey for core from 10.0.0.1 port 45784 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:18.479066 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:18.482759 systemd-logind[1291]: New session 20 of user core.
Nov 1 00:39:18.483644 systemd[1]: Started session-20.scope.
Nov 1 00:39:18.791311 sshd[3654]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:18.794261 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:45784.service: Deactivated successfully.
Nov 1 00:39:18.795175 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:39:18.796467 systemd-logind[1291]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:39:18.797552 systemd-logind[1291]: Removed session 20.
Nov 1 00:39:20.888089 kubelet[2056]: E1101 00:39:20.888044 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:21.887748 kubelet[2056]: E1101 00:39:21.887681 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:23.795294 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:34878.service.
Nov 1 00:39:23.834090 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 34878 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:23.835256 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:23.839669 systemd-logind[1291]: New session 21 of user core.
Nov 1 00:39:23.840322 systemd[1]: Started session-21.scope.
Nov 1 00:39:23.989639 sshd[3670]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:23.993042 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:34878.service: Deactivated successfully.
Nov 1 00:39:23.994519 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 00:39:23.995733 systemd-logind[1291]: Session 21 logged out. Waiting for processes to exit.
Nov 1 00:39:23.996728 systemd-logind[1291]: Removed session 21.
Nov 1 00:39:25.887881 kubelet[2056]: E1101 00:39:25.887805 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:28.994297 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:34894.service.
Nov 1 00:39:29.035538 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 34894 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:29.036768 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:29.040789 systemd-logind[1291]: New session 22 of user core.
Nov 1 00:39:29.041558 systemd[1]: Started session-22.scope.
Nov 1 00:39:29.211762 sshd[3686]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:29.214200 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:34894.service: Deactivated successfully.
Nov 1 00:39:29.215257 systemd-logind[1291]: Session 22 logged out. Waiting for processes to exit.
Nov 1 00:39:29.215318 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 00:39:29.216275 systemd-logind[1291]: Removed session 22.
Nov 1 00:39:31.887872 kubelet[2056]: E1101 00:39:31.887836 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:32.888033 kubelet[2056]: E1101 00:39:32.887979 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:34.216231 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:50560.service.
Nov 1 00:39:34.262214 sshd[3702]: Accepted publickey for core from 10.0.0.1 port 50560 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:34.263535 sshd[3702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:34.267521 systemd-logind[1291]: New session 23 of user core.
Nov 1 00:39:34.268361 systemd[1]: Started session-23.scope.
Nov 1 00:39:34.375914 sshd[3702]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:34.378194 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:50560.service: Deactivated successfully.
Nov 1 00:39:34.379115 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:39:34.379127 systemd-logind[1291]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:39:34.379996 systemd-logind[1291]: Removed session 23.
Nov 1 00:39:39.380757 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:50566.service.
Nov 1 00:39:39.417863 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 50566 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:39.419444 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:39.423522 systemd-logind[1291]: New session 24 of user core.
Nov 1 00:39:39.424185 systemd[1]: Started session-24.scope.
Nov 1 00:39:39.539252 sshd[3717]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:39.541333 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:50566.service: Deactivated successfully.
Nov 1 00:39:39.542321 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:39:39.542329 systemd-logind[1291]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:39:39.543068 systemd-logind[1291]: Removed session 24.
Nov 1 00:39:44.543627 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:37746.service.
Nov 1 00:39:44.581068 sshd[3731]: Accepted publickey for core from 10.0.0.1 port 37746 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:44.582541 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:44.586182 systemd-logind[1291]: New session 25 of user core.
Nov 1 00:39:44.587162 systemd[1]: Started session-25.scope.
Nov 1 00:39:44.695782 sshd[3731]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:44.698483 systemd[1]: Started sshd@25-10.0.0.57:22-10.0.0.1:37756.service.
Nov 1 00:39:44.698952 systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:37746.service: Deactivated successfully.
Nov 1 00:39:44.700865 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 00:39:44.701232 systemd-logind[1291]: Session 25 logged out. Waiting for processes to exit.
Nov 1 00:39:44.702709 systemd-logind[1291]: Removed session 25.
Nov 1 00:39:44.735738 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 37756 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:39:44.737211 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:44.740774 systemd-logind[1291]: New session 26 of user core.
Nov 1 00:39:44.741545 systemd[1]: Started session-26.scope.
Nov 1 00:39:46.126310 env[1303]: time="2025-11-01T00:39:46.126242331Z" level=info msg="StopContainer for \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\" with timeout 30 (s)"
Nov 1 00:39:46.127952 env[1303]: time="2025-11-01T00:39:46.127144382Z" level=info msg="Stop container \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\" with signal terminated"
Nov 1 00:39:46.143060 systemd[1]: run-containerd-runc-k8s.io-4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8-runc.nudV20.mount: Deactivated successfully.
Nov 1 00:39:46.153007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1-rootfs.mount: Deactivated successfully.
Nov 1 00:39:46.156747 env[1303]: time="2025-11-01T00:39:46.156566690Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:39:46.163908 env[1303]: time="2025-11-01T00:39:46.163862477Z" level=info msg="shim disconnected" id=e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1
Nov 1 00:39:46.163908 env[1303]: time="2025-11-01T00:39:46.163908555Z" level=warning msg="cleaning up after shim disconnected" id=e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1 namespace=k8s.io
Nov 1 00:39:46.164043 env[1303]: time="2025-11-01T00:39:46.163917993Z" level=info msg="cleaning up dead shim"
Nov 1 00:39:46.164617 env[1303]: time="2025-11-01T00:39:46.164583796Z" level=info msg="StopContainer for \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\" with timeout 2 (s)"
Nov 1 00:39:46.164871 env[1303]: time="2025-11-01T00:39:46.164834913Z" level=info msg="Stop container \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\" with signal terminated"
Nov 1 00:39:46.171246 systemd-networkd[1077]: lxc_health: Link DOWN
Nov 1 00:39:46.171254 systemd-networkd[1077]: lxc_health: Lost carrier
Nov 1 00:39:46.171734 env[1303]: time="2025-11-01T00:39:46.171695405Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 runtime=io.containerd.runc.v2\n"
Nov 1 00:39:46.174911 env[1303]: time="2025-11-01T00:39:46.174881122Z" level=info msg="StopContainer for \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\" returns successfully"
Nov 1 00:39:46.175623 env[1303]: time="2025-11-01T00:39:46.175580389Z" level=info msg="StopPodSandbox for \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\""
Nov 1 00:39:46.175679 env[1303]: time="2025-11-01T00:39:46.175663195Z" level=info msg="Container to stop \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:39:46.177641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f-shm.mount: Deactivated successfully.
Nov 1 00:39:46.210704 env[1303]: time="2025-11-01T00:39:46.210364633Z" level=info msg="shim disconnected" id=e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f
Nov 1 00:39:46.210704 env[1303]: time="2025-11-01T00:39:46.210433013Z" level=warning msg="cleaning up after shim disconnected" id=e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f namespace=k8s.io
Nov 1 00:39:46.210704 env[1303]: time="2025-11-01T00:39:46.210444544Z" level=info msg="cleaning up dead shim"
Nov 1 00:39:46.217111 env[1303]: time="2025-11-01T00:39:46.217073597Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3835 runtime=io.containerd.runc.v2\n"
Nov 1 00:39:46.217472 env[1303]: time="2025-11-01T00:39:46.217443639Z" level=info msg="TearDown network for sandbox \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" successfully"
Nov 1 00:39:46.217538 env[1303]: time="2025-11-01T00:39:46.217471352Z" level=info msg="StopPodSandbox for \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" returns successfully"
Nov 1 00:39:46.238235 env[1303]: time="2025-11-01T00:39:46.238158187Z" level=info msg="shim disconnected" id=4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8
Nov 1 00:39:46.238235 env[1303]: time="2025-11-01T00:39:46.238218813Z" level=warning msg="cleaning up after shim disconnected" id=4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8 namespace=k8s.io
Nov 1 00:39:46.238235 env[1303]: time="2025-11-01T00:39:46.238227559Z" level=info msg="cleaning up dead shim"
Nov 1 00:39:46.244773 env[1303]: time="2025-11-01T00:39:46.244722718Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\n"
Nov 1 00:39:46.247909 env[1303]: time="2025-11-01T00:39:46.247853069Z" level=info msg="StopContainer for \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\" returns successfully"
Nov 1 00:39:46.248503 env[1303]: time="2025-11-01T00:39:46.248471954Z" level=info msg="StopPodSandbox for \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\""
Nov 1 00:39:46.248595 env[1303]: time="2025-11-01T00:39:46.248566734Z" level=info msg="Container to stop \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:39:46.248686 env[1303]: time="2025-11-01T00:39:46.248596721Z" level=info msg="Container to stop \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:39:46.248686 env[1303]: time="2025-11-01T00:39:46.248611348Z" level=info msg="Container to stop \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:39:46.248686 env[1303]: time="2025-11-01T00:39:46.248625396Z" level=info msg="Container to stop \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:39:46.248686 env[1303]: time="2025-11-01T00:39:46.248639051Z" level=info msg="Container to stop \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:39:46.272888 env[1303]: time="2025-11-01T00:39:46.272830499Z" level=info msg="shim disconnected" id=af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012
Nov 1 00:39:46.272888 env[1303]: time="2025-11-01T00:39:46.272878860Z" level=warning msg="cleaning up after shim disconnected" id=af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012 namespace=k8s.io
Nov 1 00:39:46.272888 env[1303]: time="2025-11-01T00:39:46.272887988Z" level=info msg="cleaning up dead shim"
Nov 1 00:39:46.281051 env[1303]: time="2025-11-01T00:39:46.281006227Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3892 runtime=io.containerd.runc.v2\n"
Nov 1 00:39:46.281827 env[1303]: time="2025-11-01T00:39:46.281681058Z" level=info msg="TearDown network for sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" successfully"
Nov 1 00:39:46.281827 env[1303]: time="2025-11-01T00:39:46.281704753Z" level=info msg="StopPodSandbox for \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" returns successfully"
Nov 1 00:39:46.295718 kubelet[2056]: I1101 00:39:46.295669 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5j56\" (UniqueName: \"kubernetes.io/projected/8c15cc8d-d91f-465f-9473-8fb8521f361d-kube-api-access-w5j56\") pod \"8c15cc8d-d91f-465f-9473-8fb8521f361d\" (UID: \"8c15cc8d-d91f-465f-9473-8fb8521f361d\") "
Nov 1 00:39:46.295718 kubelet[2056]: I1101 00:39:46.295711 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c15cc8d-d91f-465f-9473-8fb8521f361d-cilium-config-path\") pod \"8c15cc8d-d91f-465f-9473-8fb8521f361d\" (UID: \"8c15cc8d-d91f-465f-9473-8fb8521f361d\") "
Nov 1 00:39:46.298160 kubelet[2056]: I1101 00:39:46.298139 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c15cc8d-d91f-465f-9473-8fb8521f361d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c15cc8d-d91f-465f-9473-8fb8521f361d" (UID: "8c15cc8d-d91f-465f-9473-8fb8521f361d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:39:46.299259 kubelet[2056]: I1101 00:39:46.298593 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c15cc8d-d91f-465f-9473-8fb8521f361d-kube-api-access-w5j56" (OuterVolumeSpecName: "kube-api-access-w5j56") pod "8c15cc8d-d91f-465f-9473-8fb8521f361d" (UID: "8c15cc8d-d91f-465f-9473-8fb8521f361d"). InnerVolumeSpecName "kube-api-access-w5j56". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:39:46.396742 kubelet[2056]: I1101 00:39:46.395957 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-bpf-maps\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.396742 kubelet[2056]: I1101 00:39:46.396019 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-lib-modules\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.396742 kubelet[2056]: I1101 00:39:46.396042 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-net\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.396742 kubelet[2056]: I1101 00:39:46.396076 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42ff1dd8-0fa8-4203-9456-376e4add7b0f-clustermesh-secrets\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.396742 kubelet[2056]: I1101 00:39:46.396100 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-etc-cni-netd\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.396742 kubelet[2056]: I1101 00:39:46.396121 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-xtables-lock\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.397097 kubelet[2056]: I1101 00:39:46.396126 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397097 kubelet[2056]: I1101 00:39:46.396134 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397097 kubelet[2056]: I1101 00:39:46.396162 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cni-path" (OuterVolumeSpecName: "cni-path") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397097 kubelet[2056]: I1101 00:39:46.396140 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cni-path\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.397097 kubelet[2056]: I1101 00:39:46.396187 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397312 kubelet[2056]: I1101 00:39:46.396203 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397312 kubelet[2056]: I1101 00:39:46.396206 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hostproc\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.397312 kubelet[2056]: I1101 00:39:46.396220 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397312 kubelet[2056]: I1101 00:39:46.396228 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-cgroup\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.397312 kubelet[2056]: I1101 00:39:46.396235 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hostproc" (OuterVolumeSpecName: "hostproc") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397751 kubelet[2056]: I1101 00:39:46.396248 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-kernel\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.397751 kubelet[2056]: I1101 00:39:46.396271 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-run\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.397751 kubelet[2056]: I1101 00:39:46.396273 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.397751 kubelet[2056]: I1101 00:39:46.396309 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-config-path\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.397751 kubelet[2056]: I1101 00:39:46.396315 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.398099 kubelet[2056]: I1101 00:39:46.396331 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:39:46.398099 kubelet[2056]: I1101 00:39:46.396333 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxrss\" (UniqueName: \"kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-kube-api-access-hxrss\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.398099 kubelet[2056]: I1101 00:39:46.396365 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hubble-tls\") pod \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\" (UID: \"42ff1dd8-0fa8-4203-9456-376e4add7b0f\") "
Nov 1 00:39:46.398099 kubelet[2056]: I1101 00:39:46.396437 2056 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398099 kubelet[2056]: I1101 00:39:46.396453 2056 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398099 kubelet[2056]: I1101 00:39:46.396463 2056 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398099 kubelet[2056]: I1101 00:39:46.396474 2056 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396484 2056 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396494 2056 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396505 2056 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396514 2056 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396525 2056 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w5j56\" (UniqueName: \"kubernetes.io/projected/8c15cc8d-d91f-465f-9473-8fb8521f361d-kube-api-access-w5j56\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396536 2056 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c15cc8d-d91f-465f-9473-8fb8521f361d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396546 2056 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.398422 kubelet[2056]: I1101 00:39:46.396555 2056 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42ff1dd8-0fa8-4203-9456-376e4add7b0f-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.399046 kubelet[2056]: I1101 00:39:46.399015 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:39:46.399413 kubelet[2056]: I1101 00:39:46.399364 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-kube-api-access-hxrss" (OuterVolumeSpecName: "kube-api-access-hxrss") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "kube-api-access-hxrss". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:39:46.400005 kubelet[2056]: I1101 00:39:46.399952 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:39:46.400720 kubelet[2056]: I1101 00:39:46.400689 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ff1dd8-0fa8-4203-9456-376e4add7b0f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "42ff1dd8-0fa8-4203-9456-376e4add7b0f" (UID: "42ff1dd8-0fa8-4203-9456-376e4add7b0f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:39:46.497177 kubelet[2056]: I1101 00:39:46.497119 2056 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42ff1dd8-0fa8-4203-9456-376e4add7b0f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.497177 kubelet[2056]: I1101 00:39:46.497154 2056 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxrss\" (UniqueName: \"kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-kube-api-access-hxrss\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.497177 kubelet[2056]: I1101 00:39:46.497165 2056 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42ff1dd8-0fa8-4203-9456-376e4add7b0f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:46.497177 kubelet[2056]: I1101 00:39:46.497173 2056 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42ff1dd8-0fa8-4203-9456-376e4add7b0f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 1 00:39:47.133474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8-rootfs.mount: Deactivated successfully.
Nov 1 00:39:47.133634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012-rootfs.mount: Deactivated successfully.
Nov 1 00:39:47.133724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f-rootfs.mount: Deactivated successfully.
Nov 1 00:39:47.133803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012-shm.mount: Deactivated successfully.
Nov 1 00:39:47.133889 systemd[1]: var-lib-kubelet-pods-8c15cc8d\x2dd91f\x2d465f\x2d9473\x2d8fb8521f361d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5j56.mount: Deactivated successfully.
Nov 1 00:39:47.133975 systemd[1]: var-lib-kubelet-pods-42ff1dd8\x2d0fa8\x2d4203\x2d9456\x2d376e4add7b0f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxrss.mount: Deactivated successfully.
Nov 1 00:39:47.134057 systemd[1]: var-lib-kubelet-pods-42ff1dd8\x2d0fa8\x2d4203\x2d9456\x2d376e4add7b0f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 1 00:39:47.134165 systemd[1]: var-lib-kubelet-pods-42ff1dd8\x2d0fa8\x2d4203\x2d9456\x2d376e4add7b0f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 1 00:39:47.216581 kubelet[2056]: I1101 00:39:47.216535 2056 scope.go:117] "RemoveContainer" containerID="e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1"
Nov 1 00:39:47.218022 env[1303]: time="2025-11-01T00:39:47.217854065Z" level=info msg="RemoveContainer for \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\""
Nov 1 00:39:47.222738 env[1303]: time="2025-11-01T00:39:47.222693460Z" level=info msg="RemoveContainer for \"e0edff8bbda060d7ccfbccc84e87d25c171888993b9f78df8ad3e75d906b2ac1\" returns successfully"
Nov 1 00:39:47.223303 kubelet[2056]: I1101 00:39:47.223269 2056 scope.go:117] "RemoveContainer" containerID="4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8"
Nov 1 00:39:47.224404 env[1303]: time="2025-11-01T00:39:47.224350425Z" level=info msg="RemoveContainer for \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\""
Nov 1 00:39:47.228135 env[1303]: time="2025-11-01T00:39:47.228087999Z" level=info msg="RemoveContainer for \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\" returns successfully"
Nov 1 00:39:47.228328 kubelet[2056]: I1101 00:39:47.228304 2056 scope.go:117] "RemoveContainer" containerID="9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83"
Nov 1 00:39:47.229321 env[1303]: time="2025-11-01T00:39:47.229291793Z" level=info msg="RemoveContainer for \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\""
Nov 1 00:39:47.233074 env[1303]: time="2025-11-01T00:39:47.232897387Z" level=info msg="RemoveContainer for \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\" returns successfully"
Nov 1 00:39:47.233534 kubelet[2056]: I1101 00:39:47.233502 2056 scope.go:117] "RemoveContainer" containerID="fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd"
Nov 1 00:39:47.235343 env[1303]: time="2025-11-01T00:39:47.235313301Z" level=info msg="RemoveContainer for \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\""
Nov 1 00:39:47.238777 env[1303]: time="2025-11-01T00:39:47.238723424Z" level=info msg="RemoveContainer for \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\" returns successfully"
Nov 1 00:39:47.239040 kubelet[2056]: I1101 00:39:47.239015 2056 scope.go:117] "RemoveContainer" containerID="d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b"
Nov 1 00:39:47.239917 env[1303]: time="2025-11-01T00:39:47.239895619Z" level=info msg="RemoveContainer for \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\""
Nov 1 00:39:47.246634 env[1303]: time="2025-11-01T00:39:47.246566510Z" level=info msg="RemoveContainer for \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\" returns successfully"
Nov 1 00:39:47.246828 kubelet[2056]: I1101 00:39:47.246804 2056 scope.go:117] "RemoveContainer" containerID="381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120"
Nov 1 00:39:47.247831 env[1303]: time="2025-11-01T00:39:47.247802987Z" level=info msg="RemoveContainer for \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\""
Nov 1 00:39:47.251045 env[1303]: time="2025-11-01T00:39:47.250994844Z" level=info
msg="RemoveContainer for \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\" returns successfully" Nov 1 00:39:47.251216 kubelet[2056]: I1101 00:39:47.251188 2056 scope.go:117] "RemoveContainer" containerID="4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8" Nov 1 00:39:47.251502 env[1303]: time="2025-11-01T00:39:47.251400444Z" level=error msg="ContainerStatus for \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\": not found" Nov 1 00:39:47.251608 kubelet[2056]: E1101 00:39:47.251587 2056 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\": not found" containerID="4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8" Nov 1 00:39:47.251688 kubelet[2056]: I1101 00:39:47.251616 2056 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8"} err="failed to get container status \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c2a298bd3368c98ec57fcf2c60c95e30917f72fccc783ef5a170be9b1484ec8\": not found" Nov 1 00:39:47.251688 kubelet[2056]: I1101 00:39:47.251688 2056 scope.go:117] "RemoveContainer" containerID="9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83" Nov 1 00:39:47.251858 env[1303]: time="2025-11-01T00:39:47.251813137Z" level=error msg="ContainerStatus for \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\": not found" Nov 1 00:39:47.251929 kubelet[2056]: E1101 00:39:47.251915 2056 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\": not found" containerID="9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83" Nov 1 00:39:47.251958 kubelet[2056]: I1101 00:39:47.251931 2056 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83"} err="failed to get container status \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ea4654912dc1c4eea8ddc6fdd2954da7706aa5ffd13487f48d61ae973e39c83\": not found" Nov 1 00:39:47.251958 kubelet[2056]: I1101 00:39:47.251944 2056 scope.go:117] "RemoveContainer" containerID="fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd" Nov 1 00:39:47.252092 env[1303]: time="2025-11-01T00:39:47.252052802Z" level=error msg="ContainerStatus for \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\": not found" Nov 1 00:39:47.252180 kubelet[2056]: E1101 00:39:47.252163 2056 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\": not found" containerID="fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd" Nov 1 00:39:47.252231 kubelet[2056]: I1101 00:39:47.252180 2056 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd"} err="failed to get container status \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc8b6f7e24a183150118ea193ecf051e873a00c85b5dc91b5bd337539ccddebd\": not found" Nov 1 00:39:47.252231 kubelet[2056]: I1101 00:39:47.252192 2056 scope.go:117] "RemoveContainer" containerID="d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b" Nov 1 00:39:47.252500 env[1303]: time="2025-11-01T00:39:47.252434787Z" level=error msg="ContainerStatus for \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\": not found" Nov 1 00:39:47.252618 kubelet[2056]: E1101 00:39:47.252600 2056 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\": not found" containerID="d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b" Nov 1 00:39:47.252618 kubelet[2056]: I1101 00:39:47.252616 2056 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b"} err="failed to get container status \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d719ccc3ab5c106158a0fa98014502df3f59a51837a16439b6e265e1e6e8248b\": not found" Nov 1 00:39:47.252714 kubelet[2056]: I1101 00:39:47.252626 2056 scope.go:117] "RemoveContainer" containerID="381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120" Nov 1 00:39:47.252843 env[1303]: 
time="2025-11-01T00:39:47.252794480Z" level=error msg="ContainerStatus for \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\": not found" Nov 1 00:39:47.252949 kubelet[2056]: E1101 00:39:47.252928 2056 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\": not found" containerID="381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120" Nov 1 00:39:47.253004 kubelet[2056]: I1101 00:39:47.252951 2056 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120"} err="failed to get container status \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\": rpc error: code = NotFound desc = an error occurred when try to find container \"381bcdc0941a40298fe56c83b6647c862c50711033786ee1aeaf16b9f8939120\": not found" Nov 1 00:39:48.081342 sshd[3744]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:48.084304 systemd[1]: Started sshd@26-10.0.0.57:22-10.0.0.1:37770.service. Nov 1 00:39:48.084806 systemd[1]: sshd@25-10.0.0.57:22-10.0.0.1:37756.service: Deactivated successfully. Nov 1 00:39:48.085626 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:39:48.085934 systemd-logind[1291]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:39:48.086748 systemd-logind[1291]: Removed session 26. 
Nov 1 00:39:48.125169 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 37770 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:39:48.126514 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:48.130143 systemd-logind[1291]: New session 27 of user core. Nov 1 00:39:48.131108 systemd[1]: Started session-27.scope. Nov 1 00:39:48.889403 kubelet[2056]: I1101 00:39:48.889339 2056 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ff1dd8-0fa8-4203-9456-376e4add7b0f" path="/var/lib/kubelet/pods/42ff1dd8-0fa8-4203-9456-376e4add7b0f/volumes" Nov 1 00:39:48.890102 kubelet[2056]: I1101 00:39:48.890070 2056 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c15cc8d-d91f-465f-9473-8fb8521f361d" path="/var/lib/kubelet/pods/8c15cc8d-d91f-465f-9473-8fb8521f361d/volumes" Nov 1 00:39:49.207792 sshd[3911]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:49.210413 systemd[1]: Started sshd@27-10.0.0.57:22-10.0.0.1:37772.service. Nov 1 00:39:49.210866 systemd[1]: sshd@26-10.0.0.57:22-10.0.0.1:37770.service: Deactivated successfully. Nov 1 00:39:49.212360 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:39:49.212459 systemd-logind[1291]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:39:49.213371 systemd-logind[1291]: Removed session 27. Nov 1 00:39:49.250053 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:39:49.251273 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:49.255567 systemd-logind[1291]: New session 28 of user core. Nov 1 00:39:49.256540 systemd[1]: Started session-28.scope. 
Nov 1 00:39:49.605702 kubelet[2056]: I1101 00:39:49.605556 2056 memory_manager.go:355] "RemoveStaleState removing state" podUID="8c15cc8d-d91f-465f-9473-8fb8521f361d" containerName="cilium-operator" Nov 1 00:39:49.605702 kubelet[2056]: I1101 00:39:49.605592 2056 memory_manager.go:355] "RemoveStaleState removing state" podUID="42ff1dd8-0fa8-4203-9456-376e4add7b0f" containerName="cilium-agent" Nov 1 00:39:49.715361 kubelet[2056]: I1101 00:39:49.715296 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-kernel\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715361 kubelet[2056]: I1101 00:39:49.715341 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-run\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715361 kubelet[2056]: I1101 00:39:49.715364 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rm6p\" (UniqueName: \"kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-kube-api-access-2rm6p\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715684 kubelet[2056]: I1101 00:39:49.715394 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-etc-cni-netd\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715684 kubelet[2056]: I1101 00:39:49.715409 2056 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-lib-modules\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715684 kubelet[2056]: I1101 00:39:49.715426 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-bpf-maps\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715684 kubelet[2056]: I1101 00:39:49.715446 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cni-path\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715684 kubelet[2056]: I1101 00:39:49.715464 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hubble-tls\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715684 kubelet[2056]: I1101 00:39:49.715508 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-config-path\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715856 kubelet[2056]: I1101 00:39:49.715532 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-cgroup\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715856 kubelet[2056]: I1101 00:39:49.715622 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-ipsec-secrets\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715856 kubelet[2056]: I1101 00:39:49.715688 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-net\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715856 kubelet[2056]: I1101 00:39:49.715721 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-xtables-lock\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715856 kubelet[2056]: I1101 00:39:49.715748 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hostproc\") pod \"cilium-kcjl4\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.715856 kubelet[2056]: I1101 00:39:49.715771 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-clustermesh-secrets\") pod \"cilium-kcjl4\" (UID: 
\"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " pod="kube-system/cilium-kcjl4" Nov 1 00:39:49.769907 sshd[3924]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:49.773086 systemd[1]: Started sshd@28-10.0.0.57:22-10.0.0.1:37778.service. Nov 1 00:39:49.773793 systemd[1]: sshd@27-10.0.0.57:22-10.0.0.1:37772.service: Deactivated successfully. Nov 1 00:39:49.775581 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 00:39:49.775777 systemd-logind[1291]: Session 28 logged out. Waiting for processes to exit. Nov 1 00:39:49.777148 systemd-logind[1291]: Removed session 28. Nov 1 00:39:49.813984 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 37778 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:39:49.815465 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:49.821104 systemd[1]: Started session-29.scope. Nov 1 00:39:49.822960 systemd-logind[1291]: New session 29 of user core. Nov 1 00:39:49.947588 kubelet[2056]: E1101 00:39:49.947456 2056 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:39:50.089618 kubelet[2056]: E1101 00:39:50.089579 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:39:50.091192 env[1303]: time="2025-11-01T00:39:50.091136920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcjl4,Uid:03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab,Namespace:kube-system,Attempt:0,}" Nov 1 00:39:50.150025 env[1303]: time="2025-11-01T00:39:50.149771907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:39:50.150025 env[1303]: time="2025-11-01T00:39:50.149826602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:39:50.150025 env[1303]: time="2025-11-01T00:39:50.149840989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:39:50.150312 env[1303]: time="2025-11-01T00:39:50.150067789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf pid=3963 runtime=io.containerd.runc.v2 Nov 1 00:39:50.190826 env[1303]: time="2025-11-01T00:39:50.190504315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcjl4,Uid:03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\"" Nov 1 00:39:50.192234 kubelet[2056]: E1101 00:39:50.191703 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:39:50.194842 env[1303]: time="2025-11-01T00:39:50.194698784Z" level=info msg="CreateContainer within sandbox \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:39:50.219057 env[1303]: time="2025-11-01T00:39:50.218993452Z" level=info msg="CreateContainer within sandbox \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6\"" Nov 1 00:39:50.219739 env[1303]: time="2025-11-01T00:39:50.219684173Z" level=info msg="StartContainer for 
\"6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6\"" Nov 1 00:39:50.264651 env[1303]: time="2025-11-01T00:39:50.264599067Z" level=info msg="StartContainer for \"6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6\" returns successfully" Nov 1 00:39:50.302237 env[1303]: time="2025-11-01T00:39:50.302141041Z" level=info msg="shim disconnected" id=6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6 Nov 1 00:39:50.302237 env[1303]: time="2025-11-01T00:39:50.302201436Z" level=warning msg="cleaning up after shim disconnected" id=6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6 namespace=k8s.io Nov 1 00:39:50.302237 env[1303]: time="2025-11-01T00:39:50.302218959Z" level=info msg="cleaning up dead shim" Nov 1 00:39:50.308531 env[1303]: time="2025-11-01T00:39:50.308503274Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4046 runtime=io.containerd.runc.v2\n" Nov 1 00:39:51.231699 env[1303]: time="2025-11-01T00:39:51.231653826Z" level=info msg="StopPodSandbox for \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\"" Nov 1 00:39:51.232229 env[1303]: time="2025-11-01T00:39:51.232195384Z" level=info msg="Container to stop \"6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:39:51.234890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf-shm.mount: Deactivated successfully. Nov 1 00:39:51.253749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf-rootfs.mount: Deactivated successfully. 
Nov 1 00:39:51.309989 env[1303]: time="2025-11-01T00:39:51.309925020Z" level=info msg="shim disconnected" id=8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf Nov 1 00:39:51.309989 env[1303]: time="2025-11-01T00:39:51.309988311Z" level=warning msg="cleaning up after shim disconnected" id=8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf namespace=k8s.io Nov 1 00:39:51.310280 env[1303]: time="2025-11-01T00:39:51.310003369Z" level=info msg="cleaning up dead shim" Nov 1 00:39:51.317308 env[1303]: time="2025-11-01T00:39:51.317261961Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4079 runtime=io.containerd.runc.v2\n" Nov 1 00:39:51.317880 env[1303]: time="2025-11-01T00:39:51.317841020Z" level=info msg="TearDown network for sandbox \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" successfully" Nov 1 00:39:51.317880 env[1303]: time="2025-11-01T00:39:51.317870435Z" level=info msg="StopPodSandbox for \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" returns successfully" Nov 1 00:39:51.434804 kubelet[2056]: I1101 00:39:51.434708 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hostproc\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.434804 kubelet[2056]: I1101 00:39:51.434795 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-clustermesh-secrets\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435369 kubelet[2056]: I1101 00:39:51.434828 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rm6p\" 
(UniqueName: \"kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-kube-api-access-2rm6p\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435369 kubelet[2056]: I1101 00:39:51.434848 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-lib-modules\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435369 kubelet[2056]: I1101 00:39:51.434867 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cni-path\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435369 kubelet[2056]: I1101 00:39:51.434890 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-net\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435369 kubelet[2056]: I1101 00:39:51.434896 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hostproc" (OuterVolumeSpecName: "hostproc") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.435369 kubelet[2056]: I1101 00:39:51.434908 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-xtables-lock\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435645 kubelet[2056]: I1101 00:39:51.434980 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hubble-tls\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435645 kubelet[2056]: I1101 00:39:51.435007 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-ipsec-secrets\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435645 kubelet[2056]: I1101 00:39:51.435027 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-bpf-maps\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435645 kubelet[2056]: I1101 00:39:51.435051 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-etc-cni-netd\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435645 kubelet[2056]: I1101 00:39:51.435075 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-cgroup\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435645 kubelet[2056]: I1101 00:39:51.435095 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-kernel\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435868 kubelet[2056]: I1101 00:39:51.435115 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-config-path\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435868 kubelet[2056]: I1101 00:39:51.435137 2056 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-run\") pod \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\" (UID: \"03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab\") " Nov 1 00:39:51.435868 kubelet[2056]: I1101 00:39:51.435179 2056 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.435868 kubelet[2056]: I1101 00:39:51.434929 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.435868 kubelet[2056]: I1101 00:39:51.435203 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.435868 kubelet[2056]: I1101 00:39:51.435436 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.438129 kubelet[2056]: I1101 00:39:51.438094 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-kube-api-access-2rm6p" (OuterVolumeSpecName: "kube-api-access-2rm6p") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "kube-api-access-2rm6p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:39:51.438129 kubelet[2056]: I1101 00:39:51.438093 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:39:51.438270 kubelet[2056]: I1101 00:39:51.438129 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cni-path" (OuterVolumeSpecName: "cni-path") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.438270 kubelet[2056]: I1101 00:39:51.438141 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.438270 kubelet[2056]: I1101 00:39:51.438161 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.438270 kubelet[2056]: I1101 00:39:51.438166 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.438270 kubelet[2056]: I1101 00:39:51.438185 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.438487 kubelet[2056]: I1101 00:39:51.438185 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:39:51.438751 kubelet[2056]: I1101 00:39:51.438729 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:39:51.438951 kubelet[2056]: I1101 00:39:51.438919 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:39:51.440153 kubelet[2056]: I1101 00:39:51.440119 2056 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" (UID: "03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:39:51.440627 systemd[1]: var-lib-kubelet-pods-03a7516a\x2dc108\x2d4af0\x2d8ed9\x2dd9e9c2b7b2ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rm6p.mount: Deactivated successfully. Nov 1 00:39:51.440791 systemd[1]: var-lib-kubelet-pods-03a7516a\x2dc108\x2d4af0\x2d8ed9\x2dd9e9c2b7b2ab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:39:51.440919 systemd[1]: var-lib-kubelet-pods-03a7516a\x2dc108\x2d4af0\x2d8ed9\x2dd9e9c2b7b2ab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:39:51.441049 systemd[1]: var-lib-kubelet-pods-03a7516a\x2dc108\x2d4af0\x2d8ed9\x2dd9e9c2b7b2ab-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536399 2056 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536437 2056 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536445 2056 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536452 2056 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536461 2056 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536470 2056 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536477 2056 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536535 kubelet[2056]: I1101 00:39:51.536484 2056 reconciler_common.go:299] "Volume detached for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536904 kubelet[2056]: I1101 00:39:51.536490 2056 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536904 kubelet[2056]: I1101 00:39:51.536498 2056 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536904 kubelet[2056]: I1101 00:39:51.536504 2056 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536904 kubelet[2056]: I1101 00:39:51.536511 2056 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536904 kubelet[2056]: I1101 00:39:51.536519 2056 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2rm6p\" (UniqueName: \"kubernetes.io/projected/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-kube-api-access-2rm6p\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:51.536904 kubelet[2056]: I1101 00:39:51.536530 2056 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:39:52.236884 kubelet[2056]: I1101 00:39:52.236476 2056 scope.go:117] "RemoveContainer" containerID="6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6" Nov 1 00:39:52.243316 env[1303]: 
time="2025-11-01T00:39:52.243035062Z" level=info msg="RemoveContainer for \"6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6\"" Nov 1 00:39:52.258422 env[1303]: time="2025-11-01T00:39:52.257831223Z" level=info msg="RemoveContainer for \"6385c6b29cb9a2ef7a67789041e76cf5e8f3631e69f022a544c0505c1e2d59f6\" returns successfully" Nov 1 00:39:52.388868 kubelet[2056]: I1101 00:39:52.388810 2056 memory_manager.go:355] "RemoveStaleState removing state" podUID="03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" containerName="mount-cgroup" Nov 1 00:39:52.545773 kubelet[2056]: I1101 00:39:52.545583 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-host-proc-sys-kernel\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.545773 kubelet[2056]: I1101 00:39:52.545651 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-cilium-cgroup\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.545773 kubelet[2056]: I1101 00:39:52.545675 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-host-proc-sys-net\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.545773 kubelet[2056]: I1101 00:39:52.545705 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-cilium-run\") pod \"cilium-gqfj2\" (UID: 
\"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.545773 kubelet[2056]: I1101 00:39:52.545726 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-etc-cni-netd\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.545773 kubelet[2056]: I1101 00:39:52.545747 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-xtables-lock\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546604 kubelet[2056]: I1101 00:39:52.545768 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab965146-7a80-4b93-a3f0-b654a4592966-cilium-config-path\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546604 kubelet[2056]: I1101 00:39:52.545788 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-bpf-maps\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546604 kubelet[2056]: I1101 00:39:52.545807 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-lib-modules\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546604 kubelet[2056]: I1101 00:39:52.545829 2056 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab965146-7a80-4b93-a3f0-b654a4592966-clustermesh-secrets\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546604 kubelet[2056]: I1101 00:39:52.545847 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab965146-7a80-4b93-a3f0-b654a4592966-hubble-tls\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546604 kubelet[2056]: I1101 00:39:52.545867 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-hostproc\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546822 kubelet[2056]: I1101 00:39:52.545885 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab965146-7a80-4b93-a3f0-b654a4592966-cni-path\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546822 kubelet[2056]: I1101 00:39:52.545904 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab965146-7a80-4b93-a3f0-b654a4592966-cilium-ipsec-secrets\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.546822 kubelet[2056]: I1101 00:39:52.545926 2056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpsbz\" (UniqueName: 
\"kubernetes.io/projected/ab965146-7a80-4b93-a3f0-b654a4592966-kube-api-access-bpsbz\") pod \"cilium-gqfj2\" (UID: \"ab965146-7a80-4b93-a3f0-b654a4592966\") " pod="kube-system/cilium-gqfj2" Nov 1 00:39:52.699839 kubelet[2056]: E1101 00:39:52.699724 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:39:52.701852 env[1303]: time="2025-11-01T00:39:52.700676182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqfj2,Uid:ab965146-7a80-4b93-a3f0-b654a4592966,Namespace:kube-system,Attempt:0,}" Nov 1 00:39:52.730870 env[1303]: time="2025-11-01T00:39:52.730664234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:39:52.730870 env[1303]: time="2025-11-01T00:39:52.730722245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:39:52.730870 env[1303]: time="2025-11-01T00:39:52.730736201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:39:52.734686 env[1303]: time="2025-11-01T00:39:52.730966688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1 pid=4105 runtime=io.containerd.runc.v2 Nov 1 00:39:52.785314 env[1303]: time="2025-11-01T00:39:52.782527357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqfj2,Uid:ab965146-7a80-4b93-a3f0-b654a4592966,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\"" Nov 1 00:39:52.785678 kubelet[2056]: E1101 00:39:52.783181 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:39:52.787290 env[1303]: time="2025-11-01T00:39:52.786933876Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:39:52.814625 env[1303]: time="2025-11-01T00:39:52.814462704Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f21aeb9f2eba65cb33b45213622ac3db6f8c58b3c7a25cf39746ac10ad4b481f\"" Nov 1 00:39:52.816476 env[1303]: time="2025-11-01T00:39:52.815281397Z" level=info msg="StartContainer for \"f21aeb9f2eba65cb33b45213622ac3db6f8c58b3c7a25cf39746ac10ad4b481f\"" Nov 1 00:39:52.884841 env[1303]: time="2025-11-01T00:39:52.884785618Z" level=info msg="StartContainer for \"f21aeb9f2eba65cb33b45213622ac3db6f8c58b3c7a25cf39746ac10ad4b481f\" returns successfully" Nov 1 00:39:52.890226 kubelet[2056]: I1101 00:39:52.890195 2056 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab" path="/var/lib/kubelet/pods/03a7516a-c108-4af0-8ed9-d9e9c2b7b2ab/volumes" Nov 1 00:39:52.931392 env[1303]: time="2025-11-01T00:39:52.931297839Z" level=info msg="shim disconnected" id=f21aeb9f2eba65cb33b45213622ac3db6f8c58b3c7a25cf39746ac10ad4b481f Nov 1 00:39:52.931629 env[1303]: time="2025-11-01T00:39:52.931403941Z" level=warning msg="cleaning up after shim disconnected" id=f21aeb9f2eba65cb33b45213622ac3db6f8c58b3c7a25cf39746ac10ad4b481f namespace=k8s.io Nov 1 00:39:52.931629 env[1303]: time="2025-11-01T00:39:52.931427986Z" level=info msg="cleaning up dead shim" Nov 1 00:39:52.948055 env[1303]: time="2025-11-01T00:39:52.947988415Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4187 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:39:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Nov 1 00:39:53.250867 kubelet[2056]: E1101 00:39:53.250809 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:39:53.252985 env[1303]: time="2025-11-01T00:39:53.252912601Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:39:53.773876 env[1303]: time="2025-11-01T00:39:53.773769460Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb565d17a260a322de05c6cc25270114de888073f173010155cc009f223e0123\"" Nov 1 00:39:53.778503 env[1303]: time="2025-11-01T00:39:53.776104230Z" level=info msg="StartContainer for 
\"bb565d17a260a322de05c6cc25270114de888073f173010155cc009f223e0123\"" Nov 1 00:39:53.855398 env[1303]: time="2025-11-01T00:39:53.855321547Z" level=info msg="StartContainer for \"bb565d17a260a322de05c6cc25270114de888073f173010155cc009f223e0123\" returns successfully" Nov 1 00:39:53.878931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb565d17a260a322de05c6cc25270114de888073f173010155cc009f223e0123-rootfs.mount: Deactivated successfully. Nov 1 00:39:54.019500 env[1303]: time="2025-11-01T00:39:54.018118795Z" level=info msg="shim disconnected" id=bb565d17a260a322de05c6cc25270114de888073f173010155cc009f223e0123 Nov 1 00:39:54.019500 env[1303]: time="2025-11-01T00:39:54.018172066Z" level=warning msg="cleaning up after shim disconnected" id=bb565d17a260a322de05c6cc25270114de888073f173010155cc009f223e0123 namespace=k8s.io Nov 1 00:39:54.019500 env[1303]: time="2025-11-01T00:39:54.018184360Z" level=info msg="cleaning up dead shim" Nov 1 00:39:54.040723 env[1303]: time="2025-11-01T00:39:54.037449934Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4252 runtime=io.containerd.runc.v2\n" Nov 1 00:39:54.265649 kubelet[2056]: E1101 00:39:54.265541 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:39:54.269061 env[1303]: time="2025-11-01T00:39:54.268997285Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:39:54.311670 env[1303]: time="2025-11-01T00:39:54.311483727Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"82c562f9f21cb4e88e8546925ed371545e031a103f3a9cd467444fad3b4bede5\"" Nov 1 00:39:54.313431 env[1303]: time="2025-11-01T00:39:54.313346992Z" level=info msg="StartContainer for \"82c562f9f21cb4e88e8546925ed371545e031a103f3a9cd467444fad3b4bede5\"" Nov 1 00:39:54.383478 env[1303]: time="2025-11-01T00:39:54.383375629Z" level=info msg="StartContainer for \"82c562f9f21cb4e88e8546925ed371545e031a103f3a9cd467444fad3b4bede5\" returns successfully" Nov 1 00:39:54.462452 env[1303]: time="2025-11-01T00:39:54.461761298Z" level=info msg="shim disconnected" id=82c562f9f21cb4e88e8546925ed371545e031a103f3a9cd467444fad3b4bede5 Nov 1 00:39:54.462452 env[1303]: time="2025-11-01T00:39:54.461813837Z" level=warning msg="cleaning up after shim disconnected" id=82c562f9f21cb4e88e8546925ed371545e031a103f3a9cd467444fad3b4bede5 namespace=k8s.io Nov 1 00:39:54.462452 env[1303]: time="2025-11-01T00:39:54.461824747Z" level=info msg="cleaning up dead shim" Nov 1 00:39:54.475447 env[1303]: time="2025-11-01T00:39:54.475372727Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4311 runtime=io.containerd.runc.v2\n" Nov 1 00:39:54.881888 env[1303]: time="2025-11-01T00:39:54.881839350Z" level=info msg="StopPodSandbox for \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\"" Nov 1 00:39:54.882106 env[1303]: time="2025-11-01T00:39:54.881953697Z" level=info msg="TearDown network for sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" successfully" Nov 1 00:39:54.882106 env[1303]: time="2025-11-01T00:39:54.881998192Z" level=info msg="StopPodSandbox for \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" returns successfully" Nov 1 00:39:54.882543 env[1303]: time="2025-11-01T00:39:54.882516625Z" level=info msg="RemovePodSandbox for \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\"" Nov 1 00:39:54.882617 env[1303]: time="2025-11-01T00:39:54.882545981Z" 
level=info msg="Forcibly stopping sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\"" Nov 1 00:39:54.882658 env[1303]: time="2025-11-01T00:39:54.882616775Z" level=info msg="TearDown network for sandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" successfully" Nov 1 00:39:54.895529 env[1303]: time="2025-11-01T00:39:54.895447143Z" level=info msg="RemovePodSandbox \"af50cc3ea70c0317f50b0b60306f649b3f1fc6b1286f7579cfec46daa2932012\" returns successfully" Nov 1 00:39:54.896084 env[1303]: time="2025-11-01T00:39:54.896041340Z" level=info msg="StopPodSandbox for \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\"" Nov 1 00:39:54.896292 env[1303]: time="2025-11-01T00:39:54.896147792Z" level=info msg="TearDown network for sandbox \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" successfully" Nov 1 00:39:54.896292 env[1303]: time="2025-11-01T00:39:54.896197776Z" level=info msg="StopPodSandbox for \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" returns successfully" Nov 1 00:39:54.896547 env[1303]: time="2025-11-01T00:39:54.896521040Z" level=info msg="RemovePodSandbox for \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\"" Nov 1 00:39:54.896609 env[1303]: time="2025-11-01T00:39:54.896551698Z" level=info msg="Forcibly stopping sandbox \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\"" Nov 1 00:39:54.896668 env[1303]: time="2025-11-01T00:39:54.896628914Z" level=info msg="TearDown network for sandbox \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" successfully" Nov 1 00:39:54.902134 env[1303]: time="2025-11-01T00:39:54.902033136Z" level=info msg="RemovePodSandbox \"8f798c270562a70128e60d28587e205615ecd09bd129f15849a02afbd3d6f7bf\" returns successfully" Nov 1 00:39:54.902626 env[1303]: time="2025-11-01T00:39:54.902586054Z" level=info msg="StopPodSandbox for 
\"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\"" Nov 1 00:39:54.902714 env[1303]: time="2025-11-01T00:39:54.902674903Z" level=info msg="TearDown network for sandbox \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" successfully" Nov 1 00:39:54.902758 env[1303]: time="2025-11-01T00:39:54.902712164Z" level=info msg="StopPodSandbox for \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" returns successfully" Nov 1 00:39:54.903017 env[1303]: time="2025-11-01T00:39:54.902991153Z" level=info msg="RemovePodSandbox for \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\"" Nov 1 00:39:54.903146 env[1303]: time="2025-11-01T00:39:54.903094118Z" level=info msg="Forcibly stopping sandbox \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\"" Nov 1 00:39:54.903213 env[1303]: time="2025-11-01T00:39:54.903177727Z" level=info msg="TearDown network for sandbox \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" successfully" Nov 1 00:39:54.908416 env[1303]: time="2025-11-01T00:39:54.908268424Z" level=info msg="RemovePodSandbox \"e64f66a53060ab59ff41d28e801b4b233d9af237299527ebbfcf1602771a809f\" returns successfully" Nov 1 00:39:54.948256 kubelet[2056]: E1101 00:39:54.948143 2056 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:39:55.272753 kubelet[2056]: E1101 00:39:55.272722 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:39:55.276699 env[1303]: time="2025-11-01T00:39:55.276423308Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:39:55.326944 env[1303]: 
time="2025-11-01T00:39:55.326859482Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ec7dc3449da3d58a920ca461754ae2454c5116fdffb889df79265ff571eaf42a\""
Nov 1 00:39:55.328126 env[1303]: time="2025-11-01T00:39:55.328084837Z" level=info msg="StartContainer for \"ec7dc3449da3d58a920ca461754ae2454c5116fdffb889df79265ff571eaf42a\""
Nov 1 00:39:55.501970 env[1303]: time="2025-11-01T00:39:55.500440834Z" level=info msg="StartContainer for \"ec7dc3449da3d58a920ca461754ae2454c5116fdffb889df79265ff571eaf42a\" returns successfully"
Nov 1 00:39:55.655592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec7dc3449da3d58a920ca461754ae2454c5116fdffb889df79265ff571eaf42a-rootfs.mount: Deactivated successfully.
Nov 1 00:39:55.664319 env[1303]: time="2025-11-01T00:39:55.664262075Z" level=info msg="shim disconnected" id=ec7dc3449da3d58a920ca461754ae2454c5116fdffb889df79265ff571eaf42a
Nov 1 00:39:55.664480 env[1303]: time="2025-11-01T00:39:55.664322960Z" level=warning msg="cleaning up after shim disconnected" id=ec7dc3449da3d58a920ca461754ae2454c5116fdffb889df79265ff571eaf42a namespace=k8s.io
Nov 1 00:39:55.664480 env[1303]: time="2025-11-01T00:39:55.664336315Z" level=info msg="cleaning up dead shim"
Nov 1 00:39:55.673265 env[1303]: time="2025-11-01T00:39:55.672738213Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4367 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:39:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Nov 1 00:39:56.283422 kubelet[2056]: E1101 00:39:56.283115 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:56.285982 env[1303]: time="2025-11-01T00:39:56.285936989Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 1 00:39:56.431344 env[1303]: time="2025-11-01T00:39:56.431146679Z" level=info msg="CreateContainer within sandbox \"6b1c6cc0b2a83c8492e74b3131b5b40ce47305918d9d655371209a555f775fd1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f\""
Nov 1 00:39:56.432309 env[1303]: time="2025-11-01T00:39:56.432273736Z" level=info msg="StartContainer for \"8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f\""
Nov 1 00:39:56.680064 env[1303]: time="2025-11-01T00:39:56.679613069Z" level=info msg="StartContainer for \"8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f\" returns successfully"
Nov 1 00:39:56.700660 systemd[1]: run-containerd-runc-k8s.io-8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f-runc.Ed2xrm.mount: Deactivated successfully.
Nov 1 00:39:56.889243 kubelet[2056]: E1101 00:39:56.889190 2056 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-wcrwc" podUID="87368007-39db-4dab-bc4a-437ecc36f1b1"
Nov 1 00:39:57.109360 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 1 00:39:57.286938 kubelet[2056]: E1101 00:39:57.286895 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:58.249559 kubelet[2056]: I1101 00:39:58.249493 2056 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:39:58Z","lastTransitionTime":"2025-11-01T00:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 1 00:39:58.520926 systemd[1]: run-containerd-runc-k8s.io-8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f-runc.R6vwW6.mount: Deactivated successfully.
Nov 1 00:39:58.701520 kubelet[2056]: E1101 00:39:58.701483 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:39:58.891505 kubelet[2056]: E1101 00:39:58.890698 2056 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-wcrwc" podUID="87368007-39db-4dab-bc4a-437ecc36f1b1"
Nov 1 00:40:00.374204 systemd-networkd[1077]: lxc_health: Link UP
Nov 1 00:40:00.408553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 00:40:00.413204 systemd-networkd[1077]: lxc_health: Gained carrier
Nov 1 00:40:00.703427 systemd[1]: run-containerd-runc-k8s.io-8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f-runc.mr32nl.mount: Deactivated successfully.
Nov 1 00:40:00.709791 kubelet[2056]: E1101 00:40:00.709752 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:40:00.766540 kubelet[2056]: I1101 00:40:00.765972 2056 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gqfj2" podStartSLOduration=8.765947699 podStartE2EDuration="8.765947699s" podCreationTimestamp="2025-11-01 00:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:39:57.386967506 +0000 UTC m=+122.585606814" watchObservedRunningTime="2025-11-01 00:40:00.765947699 +0000 UTC m=+125.964586997"
Nov 1 00:40:00.887849 kubelet[2056]: E1101 00:40:00.887797 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:40:01.295440 kubelet[2056]: E1101 00:40:01.295407 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:40:02.306528 kubelet[2056]: E1101 00:40:02.305904 2056 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:40:02.450555 systemd-networkd[1077]: lxc_health: Gained IPv6LL
Nov 1 00:40:02.957947 systemd[1]: run-containerd-runc-k8s.io-8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f-runc.rkHeu4.mount: Deactivated successfully.
Nov 1 00:40:05.088034 systemd[1]: run-containerd-runc-k8s.io-8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f-runc.sNSCDA.mount: Deactivated successfully.
Nov 1 00:40:07.248921 systemd[1]: run-containerd-runc-k8s.io-8bc630b38869c8ee183e30eb8ff3f1cff27f29a2b484944a8dc49ab97029b64f-runc.iUK9NV.mount: Deactivated successfully.
Nov 1 00:40:07.337425 sshd[3937]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:07.341122 systemd[1]: sshd@28-10.0.0.57:22-10.0.0.1:37778.service: Deactivated successfully.
Nov 1 00:40:07.342420 systemd[1]: session-29.scope: Deactivated successfully.
Nov 1 00:40:07.342620 systemd-logind[1291]: Session 29 logged out. Waiting for processes to exit.
Nov 1 00:40:07.343791 systemd-logind[1291]: Removed session 29.